00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2030
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3290
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.052 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.052 The recommended git tool is: git
00:00:00.053 using credential 00000000-0000-0000-0000-000000000002
00:00:00.054 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.088 Fetching changes from the remote Git repository
00:00:00.090 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.138 Using shallow fetch with depth 1
00:00:00.138 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.138 > git --version # timeout=10
00:00:00.193 > git --version # 'git version 2.39.2'
00:00:00.193 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.237 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.237 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.533 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.544 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.554 Checking out Revision 456d80899d5187c68de113852b37bde1201fd33a (FETCH_HEAD)
00:00:03.554 > git config core.sparsecheckout # timeout=10
00:00:03.564 > git read-tree -mu HEAD # timeout=10
00:00:03.579 > git checkout -f 456d80899d5187c68de113852b37bde1201fd33a # timeout=5
00:00:03.599 Commit message: "jenkins/config: Drop WFP25 for maintenance"
00:00:03.599 > git rev-list --no-walk 456d80899d5187c68de113852b37bde1201fd33a # timeout=10
00:00:03.678 [Pipeline] Start of Pipeline
00:00:03.692 [Pipeline] library
00:00:03.694 Loading library shm_lib@master
00:00:03.694 Library shm_lib@master is cached. Copying from home.
00:00:03.712 [Pipeline] node
00:00:03.721 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.723 [Pipeline] {
00:00:03.734 [Pipeline] catchError
00:00:03.736 [Pipeline] {
00:00:03.750 [Pipeline] wrap
00:00:03.760 [Pipeline] {
00:00:03.767 [Pipeline] stage
00:00:03.769 [Pipeline] { (Prologue)
00:00:03.973 [Pipeline] sh
00:00:04.257 + logger -p user.info -t JENKINS-CI
00:00:04.274 [Pipeline] echo
00:00:04.276 Node: WFP8
00:00:04.282 [Pipeline] sh
00:00:04.581 [Pipeline] setCustomBuildProperty
00:00:04.592 [Pipeline] echo
00:00:04.594 Cleanup processes
00:00:04.598 [Pipeline] sh
00:00:04.880 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.880 2985213 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:04.894 [Pipeline] sh
00:00:05.182 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.182 ++ grep -v 'sudo pgrep'
00:00:05.182 ++ awk '{print $1}'
00:00:05.182 + sudo kill -9
00:00:05.182 + true
00:00:05.195 [Pipeline] cleanWs
00:00:05.204 [WS-CLEANUP] Deleting project workspace...
00:00:05.204 [WS-CLEANUP] Deferred wipeout is used...
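The "Cleanup processes" step above hunts down SPDK processes left over from a previous run before the workspace is wiped: pgrep -af matches full command lines against the workspace path, grep -v drops the pgrep invocation itself, and awk keeps only the PIDs. A minimal standalone sketch of the same pattern (in this run no stale PIDs were found, so kill -9 received no arguments and failed, which the trailing "+ true" swallowed):

    #!/usr/bin/env bash
    # Kill any leftover processes whose command line mentions the workspace.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    pids=$(sudo pgrep -af "$WS" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill fails when $pids is empty; '|| true' keeps the job green, as in the log.
    sudo kill -9 $pids || true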
00:00:05.209 [WS-CLEANUP] done
00:00:05.213 [Pipeline] setCustomBuildProperty
00:00:05.225 [Pipeline] sh
00:00:05.504 + sudo git config --global --replace-all safe.directory '*'
00:00:05.587 [Pipeline] httpRequest
00:00:05.614 [Pipeline] echo
00:00:05.616 Sorcerer 10.211.164.101 is alive
00:00:05.621 [Pipeline] httpRequest
00:00:05.625 HttpMethod: GET
00:00:05.625 URL: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:05.625 Sending request to url: http://10.211.164.101/packages/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:05.642 Response Code: HTTP/1.1 200 OK
00:00:05.643 Success: Status code 200 is in the accepted range: 200,404
00:00:05.643 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:13.278 [Pipeline] sh
00:00:13.563 + tar --no-same-owner -xf jbp_456d80899d5187c68de113852b37bde1201fd33a.tar.gz
00:00:13.604 [Pipeline] httpRequest
00:00:13.634 [Pipeline] echo
00:00:13.635 Sorcerer 10.211.164.101 is alive
00:00:13.643 [Pipeline] httpRequest
00:00:13.648 HttpMethod: GET
00:00:13.649 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:13.649 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:13.668 Response Code: HTTP/1.1 200 OK
00:00:13.669 Success: Status code 200 is in the accepted range: 200,404
00:00:13.669 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:01:08.082 [Pipeline] sh
00:01:08.371 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:01:10.920 [Pipeline] sh
00:01:11.240 + git -C spdk log --oneline -n5
00:01:11.240 dbef7efac test: fix dpdk builds on ubuntu24
00:01:11.240 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:01:11.240 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:01:11.240 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:01:11.240 e03c164a1 nvme: add nvme_ctrlr_lock
00:01:11.253 [Pipeline] }
00:01:11.270 [Pipeline] // stage
00:01:11.279 [Pipeline] stage
00:01:11.281 [Pipeline] { (Prepare)
00:01:11.300 [Pipeline] writeFile
00:01:11.317 [Pipeline] sh
00:01:11.602 + logger -p user.info -t JENKINS-CI
00:01:11.614 [Pipeline] sh
00:01:11.898 + logger -p user.info -t JENKINS-CI
00:01:11.910 [Pipeline] sh
00:01:12.194 + cat autorun-spdk.conf
00:01:12.194 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.194 SPDK_TEST_NVMF=1
00:01:12.194 SPDK_TEST_NVME_CLI=1
00:01:12.194 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.194 SPDK_TEST_NVMF_NICS=e810
00:01:12.194 SPDK_RUN_UBSAN=1
00:01:12.194 NET_TYPE=phy
00:01:12.201 RUN_NIGHTLY=1
00:01:12.206 [Pipeline] readFile
00:01:12.232 [Pipeline] withEnv
00:01:12.235 [Pipeline] {
00:01:12.248 [Pipeline] sh
00:01:12.536 + set -ex
00:01:12.536 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:12.536 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:12.536 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.536 ++ SPDK_TEST_NVMF=1
00:01:12.536 ++ SPDK_TEST_NVME_CLI=1
00:01:12.536 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.536 ++ SPDK_TEST_NVMF_NICS=e810
00:01:12.536 ++ SPDK_RUN_UBSAN=1
00:01:12.536 ++ NET_TYPE=phy
00:01:12.536 ++ RUN_NIGHTLY=1
00:01:12.536 + case $SPDK_TEST_NVMF_NICS in
00:01:12.536 + DRIVERS=ice
00:01:12.536 + [[ tcp == \r\d\m\a ]]
00:01:12.536 + [[ -n ice ]]
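autorun-spdk.conf is a plain KEY=VALUE shell fragment, which is why the job can both print it with cat and load it with source; the case statement then maps the NIC under test to the kernel driver it needs. A hedged sketch of that load-and-dispatch pattern (the paths and the e810-to-ice mapping are taken from this log; the fallback branch is illustrative):

    #!/usr/bin/env bash
    set -ex
    conf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    # Pull the SPDK_TEST_* knobs into the environment.
    [[ -f $conf ]] && source "$conf"
    # Choose the kernel driver matching the NIC under test.
    case $SPDK_TEST_NVMF_NICS in
        e810) DRIVERS=ice ;;   # Intel E810 NICs use the ice driver, as above
        *)    DRIVERS=   ;;    # illustrative fallback
    esac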
00:01:12.536 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:12.536 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:12.536 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:12.536 rmmod: ERROR: Module irdma is not currently loaded
00:01:12.536 rmmod: ERROR: Module i40iw is not currently loaded
00:01:12.536 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:12.536 + true
00:01:12.536 + for D in $DRIVERS
00:01:12.536 + sudo modprobe ice
00:01:12.536 + exit 0
00:01:12.563 [Pipeline] }
00:01:12.606 [Pipeline] // withEnv
00:01:12.609 [Pipeline] }
00:01:12.621 [Pipeline] // stage
00:01:12.627 [Pipeline] catchError
00:01:12.628 [Pipeline] {
00:01:12.636 [Pipeline] timeout
00:01:12.636 Timeout set to expire in 50 min
00:01:12.637 [Pipeline] {
00:01:12.647 [Pipeline] stage
00:01:12.649 [Pipeline] { (Tests)
00:01:12.658 [Pipeline] sh
00:01:12.938 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.938 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.938 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.938 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:12.938 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:12.938 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:12.938 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:12.938 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:12.938 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:12.938 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:12.938 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:12.938 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.938 + source /etc/os-release
00:01:12.938 ++ NAME='Fedora Linux'
00:01:12.938 ++ VERSION='38 (Cloud Edition)'
00:01:12.938 ++ ID=fedora
00:01:12.938 ++ VERSION_ID=38
00:01:12.938 ++ VERSION_CODENAME=
00:01:12.938 ++ PLATFORM_ID=platform:f38
00:01:12.938 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:12.938 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:12.938 ++ LOGO=fedora-logo-icon
00:01:12.938 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:12.938 ++ HOME_URL=https://fedoraproject.org/
00:01:12.938 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:12.938 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:12.938 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:12.938 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:12.938 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:12.938 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:12.938 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:12.938 ++ SUPPORT_END=2024-05-14
00:01:12.938 ++ VARIANT='Cloud Edition'
00:01:12.938 ++ VARIANT_ID=cloud
00:01:12.938 + uname -a
00:01:12.938 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:12.938 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:15.481 Hugepages
00:01:15.481 node     hugesize     free /  total
00:01:15.481 node0   1048576kB       0 /      0
00:01:15.481 node0      2048kB       0 /      0
00:01:15.481 node1   1048576kB       0 /      0
00:01:15.481 node1      2048kB       0 /      0
00:01:15.481
00:01:15.481 Type   BDF            Vendor Device NUMA Driver  Device Block devices
00:01:15.481 I/OAT  0000:00:04.0   8086   2021   0    ioatdma -      -
00:01:15.481 I/OAT  0000:00:04.1   8086   2021   0    ioatdma -      -
00:01:15.481 I/OAT  0000:00:04.2   8086   2021   0    ioatdma -      -
00:01:15.481 I/OAT  0000:00:04.3   8086   2021   0    ioatdma -      -
00:01:15.481 I/OAT  0000:00:04.4   8086   2021   0    ioatdma -      -
00:01:15.481 I/OAT  0000:00:04.5   8086   2021   0    ioatdma -      -
00:01:15.481 I/OAT  0000:00:04.6   8086   2021   0    ioatdma -      -
00:01:15.481 I/OAT  0000:00:04.7   8086   2021   0    ioatdma -      -
00:01:15.481 NVMe   0000:5e:00.0   8086   0a54   0    nvme    nvme0  nvme0n1
00:01:15.481 I/OAT  0000:80:04.0   8086   2021   1    ioatdma -      -
00:01:15.481 I/OAT  0000:80:04.1   8086   2021   1    ioatdma -      -
00:01:15.481 I/OAT  0000:80:04.2   8086   2021   1    ioatdma -      -
00:01:15.481 I/OAT  0000:80:04.3   8086   2021   1    ioatdma -      -
00:01:15.481 I/OAT  0000:80:04.4   8086   2021   1    ioatdma -      -
00:01:15.481 I/OAT  0000:80:04.5   8086   2021   1    ioatdma -      -
00:01:15.481 I/OAT  0000:80:04.6   8086   2021   1    ioatdma -      -
00:01:15.481 I/OAT  0000:80:04.7   8086   2021   1    ioatdma -      -
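The "setup.sh status" report above combines per-NUMA-node hugepage counters with the PCI devices (I/OAT DMA engines and one NVMe disk here) and the drivers bound to them. The hugepage columns come straight from sysfs; a rough sketch of reading them by hand (the sysfs layout is standard Linux, the loop itself is illustrative):

    #!/usr/bin/env bash
    # Print free/total hugepages per NUMA node, as in the table above.
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            size=${hp##*hugepages-}              # e.g. 2048kB or 1048576kB
            total=$(cat "$hp/nr_hugepages")
            free=$(cat "$hp/free_hugepages")
            echo "$(basename "$node") $size $free / $total"
        done
    done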
00:01:15.481 + rm -f /tmp/spdk-ld-path
00:01:15.481 + source autorun-spdk.conf
00:01:15.481 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.481 ++ SPDK_TEST_NVMF=1
00:01:15.481 ++ SPDK_TEST_NVME_CLI=1
00:01:15.481 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.481 ++ SPDK_TEST_NVMF_NICS=e810
00:01:15.481 ++ SPDK_RUN_UBSAN=1
00:01:15.481 ++ NET_TYPE=phy
00:01:15.481 ++ RUN_NIGHTLY=1
00:01:15.481 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:15.481 + [[ -n '' ]]
00:01:15.481 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:15.481 + for M in /var/spdk/build-*-manifest.txt
00:01:15.481 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:15.481 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.481 + for M in /var/spdk/build-*-manifest.txt
00:01:15.481 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:15.481 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.481 ++ uname
00:01:15.481 + [[ Linux == \L\i\n\u\x ]]
00:01:15.481 + sudo dmesg -T
00:01:15.481 + sudo dmesg --clear
00:01:15.481 + dmesg_pid=2986637
00:01:15.481 + [[ Fedora Linux == FreeBSD ]]
00:01:15.481 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.481 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.481 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:15.481 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:15.481 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:01:15.481 + [[ -x /usr/src/fio-static/fio ]]
00:01:15.481 + export FIO_BIN=/usr/src/fio-static/fio
00:01:15.481 + FIO_BIN=/usr/src/fio-static/fio
00:01:15.481 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:15.481 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:15.481 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:15.481 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.481 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.481 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:15.481 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.481 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.481 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:15.481 + sudo dmesg -Tw
00:01:15.481 Test configuration:
00:01:15.481 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.481 SPDK_TEST_NVMF=1
00:01:15.481 SPDK_TEST_NVME_CLI=1
00:01:15.481 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.481 SPDK_TEST_NVMF_NICS=e810
00:01:15.481 SPDK_RUN_UBSAN=1
00:01:15.481 NET_TYPE=phy
00:01:15.481 RUN_NIGHTLY=1
13:43:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
13:43:06 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
13:43:06 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
13:43:06 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
13:43:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:43:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:43:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:43:06 -- paths/export.sh@5 -- $ export PATH
13:43:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
13:43:06 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
13:43:06 -- common/autobuild_common.sh@438 -- $ date +%s
13:43:06 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721734986.XXXXXX
13:43:06 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721734986.u3Dhga
13:43:06 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
13:43:06 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']'
13:43:06 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
13:43:06 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
13:43:06 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
13:43:06 -- common/autobuild_common.sh@454 -- $ get_config_params
13:43:06 -- common/autotest_common.sh@387 -- $ xtrace_disable
13:43:06 -- common/autotest_common.sh@10 -- $ set +x
13:43:06 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
13:43:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
13:43:06 -- spdk/autobuild.sh@12 -- $ umask 022
13:43:06 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
13:43:06 -- spdk/autobuild.sh@16 -- $ date -u
00:01:15.481 Tue Jul 23 11:43:06 AM UTC 2024
13:43:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:15.481 LTS-60-gdbef7efac
13:43:06 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
13:43:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
13:43:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
13:43:06 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
13:43:06 -- common/autotest_common.sh@1083 -- $ xtrace_disable
13:43:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.481 ************************************
00:01:15.481 START TEST ubsan
00:01:15.481 ************************************
13:43:06 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:01:15.481 using ubsan
00:01:15.481
00:01:15.481 real 0m0.000s
00:01:15.481 user 0m0.000s
00:01:15.481 sys 0m0.000s
13:43:06 -- common/autotest_common.sh@1105 -- $ xtrace_disable
13:43:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:15.482 ************************************
00:01:15.482 END TEST ubsan
00:01:15.482 ************************************
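The START TEST/END TEST banners and the real/user/sys lines above come from SPDK's run_test helper, which brackets a command with markers and times it. A simplified stand-in that mimics only the visible output (the real helper lives in SPDK's common/autotest_common.sh and does considerably more):

    #!/usr/bin/env bash
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"        # emits the real/user/sys lines seen in the log
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test ubsan echo 'using ubsan'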
13:43:06 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
13:43:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
13:43:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
13:43:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
13:43:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
13:43:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
13:43:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
13:43:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
13:43:06 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:15.742 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:15.742 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:16.002 Using 'verbs' RDMA provider
00:01:28.845 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:38.859 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:38.859 Creating mk/config.mk...done.
00:01:38.859 Creating mk/cc.flags.mk...done.
00:01:38.859 Type 'make' to build.
13:43:29 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
13:43:29 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
13:43:29 -- common/autotest_common.sh@1083 -- $ xtrace_disable
13:43:29 -- common/autotest_common.sh@10 -- $ set +x
00:01:38.859 ************************************
00:01:38.859 START TEST make
00:01:38.859 ************************************
13:43:29 -- common/autotest_common.sh@1104 -- $ make -j96
00:01:39.118 make[1]: Nothing to be done for 'all'.
00:01:47.245 The Meson build system
00:01:47.245 Version: 1.3.1
00:01:47.245 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:47.245 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:47.245 Build type: native build
00:01:47.245 Program cat found: YES (/usr/bin/cat)
00:01:47.245 Project name: DPDK
00:01:47.245 Project version: 23.11.0
00:01:47.245 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:47.245 C linker for the host machine: cc ld.bfd 2.39-16
00:01:47.245 Host machine cpu family: x86_64
00:01:47.245 Host machine cpu: x86_64
00:01:47.245 Message: ## Building in Developer Mode ##
00:01:47.245 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:47.245 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:47.245 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:47.245 Program python3 found: YES (/usr/bin/python3)
00:01:47.245 Program cat found: YES (/usr/bin/cat)
00:01:47.245 Compiler for C supports arguments -march=native: YES
00:01:47.245 Checking for size of "void *" : 8
00:01:47.245 Checking for size of "void *" : 8 (cached)
00:01:47.245 Library m found: YES
00:01:47.245 Library numa found: YES
00:01:47.245 Has header "numaif.h" : YES
00:01:47.245 Library fdt found: NO
00:01:47.245 Library execinfo found: NO
00:01:47.245 Has header "execinfo.h" : YES
00:01:47.245 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:47.245 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:47.245 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:47.245 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:47.245 Run-time dependency openssl found: YES 3.0.9
00:01:47.245 Run-time dependency libpcap found: YES 1.10.4
00:01:47.245 Has header "pcap.h" with dependency libpcap: YES
00:01:47.245 Compiler for C supports arguments -Wcast-qual: YES
00:01:47.245 Compiler for C supports arguments -Wdeprecated: YES
00:01:47.245 Compiler for C supports arguments -Wformat: YES
00:01:47.245 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:47.245 Compiler for C supports arguments -Wformat-security: NO
00:01:47.245 Compiler for C supports arguments -Wmissing-declarations: YES
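Each "Compiler for C supports arguments" line is meson probing whether cc accepts a flag by compiling a tiny test program with the flag plus -Werror, so an unknown-option warning becomes a hard failure. The probe can be reproduced by hand (an illustrative check, not meson's exact invocation):

    # Probe a warning flag the way the checks above do; exit status decides YES/NO.
    echo 'int main(void) { return 0; }' > /tmp/flag_probe.c
    if cc -Werror -Wcast-qual -c /tmp/flag_probe.c -o /tmp/flag_probe.o 2>/dev/null; then
        echo "Compiler for C supports arguments -Wcast-qual: YES"
    else
        echo "Compiler for C supports arguments -Wcast-qual: NO"
    fi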
00:01:47.245 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:47.245 Compiler for C supports arguments -Wnested-externs: YES
00:01:47.245 Compiler for C supports arguments -Wold-style-definition: YES
00:01:47.245 Compiler for C supports arguments -Wpointer-arith: YES
00:01:47.245 Compiler for C supports arguments -Wsign-compare: YES
00:01:47.245 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:47.245 Compiler for C supports arguments -Wundef: YES
00:01:47.245 Compiler for C supports arguments -Wwrite-strings: YES
00:01:47.245 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:47.245 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:47.245 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:47.245 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:47.245 Program objdump found: YES (/usr/bin/objdump)
00:01:47.245 Compiler for C supports arguments -mavx512f: YES
00:01:47.245 Checking if "AVX512 checking" compiles: YES
00:01:47.245 Fetching value of define "__SSE4_2__" : 1
00:01:47.245 Fetching value of define "__AES__" : 1
00:01:47.245 Fetching value of define "__AVX__" : 1
00:01:47.245 Fetching value of define "__AVX2__" : 1
00:01:47.245 Fetching value of define "__AVX512BW__" : 1
00:01:47.245 Fetching value of define "__AVX512CD__" : 1
00:01:47.245 Fetching value of define "__AVX512DQ__" : 1
00:01:47.245 Fetching value of define "__AVX512F__" : 1
00:01:47.245 Fetching value of define "__AVX512VL__" : 1
00:01:47.245 Fetching value of define "__PCLMUL__" : 1
00:01:47.245 Fetching value of define "__RDRND__" : 1
00:01:47.245 Fetching value of define "__RDSEED__" : 1
00:01:47.245 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:47.245 Fetching value of define "__znver1__" : (undefined)
00:01:47.245 Fetching value of define "__znver2__" : (undefined)
00:01:47.245 Fetching value of define "__znver3__" : (undefined)
00:01:47.245 Fetching value of define "__znver4__" : (undefined)
00:01:47.245 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:47.245 Message: lib/log: Defining dependency "log"
00:01:47.245 Message: lib/kvargs: Defining dependency "kvargs"
00:01:47.245 Message: lib/telemetry: Defining dependency "telemetry"
00:01:47.245 Checking for function "getentropy" : NO
00:01:47.245 Message: lib/eal: Defining dependency "eal"
00:01:47.245 Message: lib/ring: Defining dependency "ring"
00:01:47.245 Message: lib/rcu: Defining dependency "rcu"
00:01:47.245 Message: lib/mempool: Defining dependency "mempool"
00:01:47.245 Message: lib/mbuf: Defining dependency "mbuf"
00:01:47.245 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:47.245 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:47.245 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:47.245 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:47.245 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:47.245 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:47.245 Compiler for C supports arguments -mpclmul: YES
00:01:47.245 Compiler for C supports arguments -maes: YES
00:01:47.245 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:47.245 Compiler for C supports arguments -mavx512bw: YES
00:01:47.245 Compiler for C supports arguments -mavx512dq: YES
00:01:47.245 Compiler for C supports arguments -mavx512vl: YES
00:01:47.245 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:47.245 Compiler for C supports arguments -mavx2: YES
00:01:47.245 Compiler for C supports arguments -mavx: YES
00:01:47.245 Message: lib/net: Defining dependency "net"
00:01:47.245 Message: lib/meter: Defining dependency "meter"
00:01:47.245 Message: lib/ethdev: Defining dependency "ethdev"
00:01:47.245 Message: lib/pci: Defining dependency "pci"
00:01:47.245 Message: lib/cmdline: Defining dependency "cmdline"
00:01:47.245 Message: lib/hash: Defining dependency "hash"
00:01:47.245 Message: lib/timer: Defining dependency "timer"
00:01:47.245 Message: lib/compressdev: Defining dependency "compressdev"
00:01:47.245 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:47.245 Message: lib/dmadev: Defining dependency "dmadev"
00:01:47.245 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:47.245 Message: lib/power: Defining dependency "power"
00:01:47.245 Message: lib/reorder: Defining dependency "reorder"
00:01:47.245 Message: lib/security: Defining dependency "security"
00:01:47.245 Has header "linux/userfaultfd.h" : YES
00:01:47.245 Has header "linux/vduse.h" : YES
00:01:47.245 Message: lib/vhost: Defining dependency "vhost"
00:01:47.245 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:47.245 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:47.245 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:47.245 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:47.245 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:47.245 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:47.245 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:47.245 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:47.245 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:47.245 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:47.245 Program doxygen found: YES (/usr/bin/doxygen)
00:01:47.245 Configuring doxy-api-html.conf using configuration
00:01:47.245 Configuring doxy-api-man.conf using configuration
00:01:47.245 Program mandb found: YES (/usr/bin/mandb)
00:01:47.245 Program sphinx-build found: NO
00:01:47.245 Configuring rte_build_config.h using configuration
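The "Fetching value of define" lines show meson asking the compiler which CPU-feature macros -march=native predefines on this host; that is what turns on the AES/AVX-512 code paths. The same information can be dumped directly (gcc and clang both support -dM -E; the grep pattern is illustrative):

    # List the AVX-512 feature macros the native target predefines.
    cc -march=native -dM -E - </dev/null | grep -E '__AVX512(F|BW|CD|DQ|VL)__'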
00:01:47.245 Message:
00:01:47.245 =================
00:01:47.245 Applications Enabled
00:01:47.245 =================
00:01:47.245
00:01:47.245 apps:
00:01:47.245
00:01:47.245
00:01:47.245 Message:
00:01:47.245 =================
00:01:47.245 Libraries Enabled
00:01:47.245 =================
00:01:47.245
00:01:47.245 libs:
00:01:47.245 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:47.245 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:47.245 cryptodev, dmadev, power, reorder, security, vhost,
00:01:47.245
00:01:47.245 Message:
00:01:47.245 ===============
00:01:47.245 Drivers Enabled
00:01:47.245 ===============
00:01:47.245
00:01:47.245 common:
00:01:47.245
00:01:47.245 bus:
00:01:47.245 pci, vdev,
00:01:47.245 mempool:
00:01:47.245 ring,
00:01:47.245 dma:
00:01:47.245
00:01:47.245 net:
00:01:47.245
00:01:47.245 crypto:
00:01:47.245
00:01:47.245 compress:
00:01:47.245
00:01:47.245 vdpa:
00:01:47.245
00:01:47.245
00:01:47.245 Message:
00:01:47.245 =================
00:01:47.245 Content Skipped
00:01:47.245 =================
00:01:47.245
00:01:47.245 apps:
00:01:47.245 dumpcap: explicitly disabled via build config
00:01:47.245 graph: explicitly disabled via build config
00:01:47.245 pdump: explicitly disabled via build config
00:01:47.245 proc-info: explicitly disabled via build config
00:01:47.245 test-acl: explicitly disabled via build config
00:01:47.245 test-bbdev: explicitly disabled via build config
00:01:47.245 test-cmdline: explicitly disabled via build config
00:01:47.245 test-compress-perf: explicitly disabled via build config
00:01:47.245 test-crypto-perf: explicitly disabled via build config
00:01:47.245 test-dma-perf: explicitly disabled via build config
00:01:47.245 test-eventdev: explicitly disabled via build config
00:01:47.245 test-fib: explicitly disabled via build config
00:01:47.245 test-flow-perf: explicitly disabled via build config
00:01:47.245 test-gpudev: explicitly disabled via build config
00:01:47.245 test-mldev: explicitly disabled via build config
00:01:47.245 test-pipeline: explicitly disabled via build config
00:01:47.245 test-pmd: explicitly disabled via build config
00:01:47.245 test-regex: explicitly disabled via build config
00:01:47.245 test-sad: explicitly disabled via build config
00:01:47.245 test-security-perf: explicitly disabled via build config
00:01:47.245
00:01:47.246 libs:
00:01:47.246 metrics: explicitly disabled via build config
00:01:47.246 acl: explicitly disabled via build config
00:01:47.246 bbdev: explicitly disabled via build config
00:01:47.246 bitratestats: explicitly disabled via build config
00:01:47.246 bpf: explicitly disabled via build config
00:01:47.246 cfgfile: explicitly disabled via build config
00:01:47.246 distributor: explicitly disabled via build config
00:01:47.246 efd: explicitly disabled via build config
00:01:47.246 eventdev: explicitly disabled via build config
00:01:47.246 dispatcher: explicitly disabled via build config
00:01:47.246 gpudev: explicitly disabled via build config
00:01:47.246 gro: explicitly disabled via build config
00:01:47.246 gso: explicitly disabled via build config
00:01:47.246 ip_frag: explicitly disabled via build config
00:01:47.246 jobstats: explicitly disabled via build config
00:01:47.246 latencystats: explicitly disabled via build config
00:01:47.246 lpm: explicitly disabled via build config
00:01:47.246 member: explicitly disabled via build config
00:01:47.246 pcapng: explicitly disabled via build config
00:01:47.246 rawdev: explicitly disabled via build config
00:01:47.246 regexdev: explicitly disabled via build config
00:01:47.246 mldev: explicitly disabled via build config
00:01:47.246 rib: explicitly disabled via build config
00:01:47.246 sched: explicitly disabled via build config
00:01:47.246 stack: explicitly disabled via build config
00:01:47.246 ipsec: explicitly disabled via build config
00:01:47.246 pdcp: explicitly disabled via build config
00:01:47.246 fib: explicitly disabled via build config
00:01:47.246 port: explicitly disabled via build config
00:01:47.246 pdump: explicitly disabled via build config
00:01:47.246 table: explicitly disabled via build config
00:01:47.246 pipeline: explicitly disabled via build config
00:01:47.246 graph: explicitly disabled via build config
00:01:47.246 node: explicitly disabled via build config
00:01:47.246
00:01:47.246 drivers:
00:01:47.246 common/cpt: not in enabled drivers build config
00:01:47.246 common/dpaax: not in enabled drivers build config
00:01:47.246 common/iavf: not in enabled drivers build config
00:01:47.246 common/idpf: not in enabled drivers build config
00:01:47.246 common/mvep: not in enabled drivers build config
00:01:47.246 common/octeontx: not in enabled drivers build config
00:01:47.246 bus/auxiliary: not in enabled drivers build config
00:01:47.246 bus/cdx: not in enabled drivers build config
00:01:47.246 bus/dpaa: not in enabled drivers build config
00:01:47.246 bus/fslmc: not in enabled drivers build config
00:01:47.246 bus/ifpga: not in enabled drivers build config
00:01:47.246 bus/platform: not in enabled drivers build config
00:01:47.246 bus/vmbus: not in enabled drivers build config
00:01:47.246 common/cnxk: not in enabled drivers build config
00:01:47.246 common/mlx5: not in enabled drivers build config
00:01:47.246 common/nfp: not in enabled drivers build config
00:01:47.246 common/qat: not in enabled drivers build config
00:01:47.246 common/sfc_efx: not in enabled drivers build config
00:01:47.246 mempool/bucket: not in enabled drivers build config
00:01:47.246 mempool/cnxk: not in enabled drivers build config
00:01:47.246 mempool/dpaa: not in enabled drivers build config
00:01:47.246 mempool/dpaa2: not in enabled drivers build config
00:01:47.246 mempool/octeontx: not in enabled drivers build config
00:01:47.246 mempool/stack: not in enabled drivers build config
00:01:47.246 dma/cnxk: not in enabled drivers build config
00:01:47.246 dma/dpaa: not in enabled drivers build config
00:01:47.246 dma/dpaa2: not in enabled drivers build config
00:01:47.246 dma/hisilicon: not in enabled drivers build config
00:01:47.246 dma/idxd: not in enabled drivers build config
00:01:47.246 dma/ioat: not in enabled drivers build config
00:01:47.246 dma/skeleton: not in enabled drivers build config
00:01:47.246 net/af_packet: not in enabled drivers build config
00:01:47.246 net/af_xdp: not in enabled drivers build config
00:01:47.246 net/ark: not in enabled drivers build config
00:01:47.246 net/atlantic: not in enabled drivers build config
00:01:47.246 net/avp: not in enabled drivers build config
00:01:47.246 net/axgbe: not in enabled drivers build config
00:01:47.246 net/bnx2x: not in enabled drivers build config
00:01:47.246 net/bnxt: not in enabled drivers build config
00:01:47.246 net/bonding: not in enabled drivers build config
00:01:47.246 net/cnxk: not in enabled drivers build config
00:01:47.246 net/cpfl: not in enabled drivers build config
00:01:47.246 net/cxgbe: not in enabled drivers build config
00:01:47.246 net/dpaa: not in enabled drivers build config
00:01:47.246 net/dpaa2: not in enabled drivers build config
00:01:47.246 net/e1000: not in enabled drivers build config
00:01:47.246 net/ena: not in enabled drivers build config
00:01:47.246 net/enetc: not in enabled drivers build config
00:01:47.246 net/enetfec: not in enabled drivers build config
00:01:47.246 net/enic: not in enabled drivers build config
00:01:47.246 net/failsafe: not in enabled drivers build config
00:01:47.246 net/fm10k: not in enabled drivers build config
00:01:47.246 net/gve: not in enabled drivers build config
00:01:47.246 net/hinic: not in enabled drivers build config
00:01:47.246 net/hns3: not in enabled drivers build config
00:01:47.246 net/i40e: not in enabled drivers build config
00:01:47.246 net/iavf: not in enabled drivers build config
00:01:47.246 net/ice: not in enabled drivers build config
00:01:47.246 net/idpf: not in enabled drivers build config
00:01:47.246 net/igc: not in enabled drivers build config
00:01:47.246 net/ionic: not in enabled drivers build config
00:01:47.246 net/ipn3ke: not in enabled drivers build config
00:01:47.246 net/ixgbe: not in enabled drivers build config
00:01:47.246 net/mana: not in enabled drivers build config
00:01:47.246 net/memif: not in enabled drivers build config
00:01:47.246 net/mlx4: not in enabled drivers build config
00:01:47.246 net/mlx5: not in enabled drivers build config
00:01:47.246 net/mvneta: not in enabled drivers build config
00:01:47.246 net/mvpp2: not in enabled drivers build config
00:01:47.246 net/netvsc: not in enabled drivers build config
00:01:47.246 net/nfb: not in enabled drivers build config
00:01:47.246 net/nfp: not in enabled drivers build config
00:01:47.246 net/ngbe: not in enabled drivers build config
00:01:47.246 net/null: not in enabled drivers build config
00:01:47.246 net/octeontx: not in enabled drivers build config
00:01:47.246 net/octeon_ep: not in enabled drivers build config
00:01:47.246 net/pcap: not in enabled drivers build config
00:01:47.246 net/pfe: not in enabled drivers build config
00:01:47.246 net/qede: not in enabled drivers build config
00:01:47.246 net/ring: not in enabled drivers build config
00:01:47.246 net/sfc: not in enabled drivers build config
00:01:47.246 net/softnic: not in enabled drivers build config
00:01:47.246 net/tap: not in enabled drivers build config
00:01:47.246 net/thunderx: not in enabled drivers build config
00:01:47.246 net/txgbe: not in enabled drivers build config
00:01:47.246 net/vdev_netvsc: not in enabled drivers build config
00:01:47.246 net/vhost: not in enabled drivers build config
00:01:47.246 net/virtio: not in enabled drivers build config
00:01:47.246 net/vmxnet3: not in enabled drivers build config
00:01:47.246 raw/*: missing internal dependency, "rawdev"
00:01:47.246 crypto/armv8: not in enabled drivers build config
00:01:47.246 crypto/bcmfs: not in enabled drivers build config
00:01:47.246 crypto/caam_jr: not in enabled drivers build config
00:01:47.246 crypto/ccp: not in enabled drivers build config
00:01:47.246 crypto/cnxk: not in enabled drivers build config
00:01:47.246 crypto/dpaa_sec: not in enabled drivers build config
00:01:47.246 crypto/dpaa2_sec: not in enabled drivers build config
00:01:47.246 crypto/ipsec_mb: not in enabled drivers build config
00:01:47.246 crypto/mlx5: not in enabled drivers build config
00:01:47.246 crypto/mvsam: not in enabled drivers build config
00:01:47.246 crypto/nitrox: not in enabled drivers build config
00:01:47.246 crypto/null: not in enabled drivers build config
00:01:47.246 crypto/octeontx: not in enabled drivers build config
00:01:47.246 crypto/openssl: not in enabled drivers build config
00:01:47.246 crypto/scheduler: not in enabled drivers build config
00:01:47.246 crypto/uadk: not in enabled drivers build config
00:01:47.246 crypto/virtio: not in enabled drivers build config
00:01:47.246 compress/isal: not in enabled drivers build config
00:01:47.246 compress/mlx5: not in enabled drivers build config
00:01:47.246 compress/octeontx: not in enabled drivers build config
00:01:47.246 compress/zlib: not in enabled drivers build config
00:01:47.246 regex/*: missing internal dependency, "regexdev"
00:01:47.246 ml/*: missing internal dependency, "mldev"
00:01:47.246 vdpa/ifc: not in enabled drivers build config
00:01:47.246 vdpa/mlx5: not in enabled drivers build config
00:01:47.246 vdpa/nfp: not in enabled drivers build config
00:01:47.246 vdpa/sfc: not in enabled drivers build config
00:01:47.246 event/*: missing internal dependency, "eventdev"
00:01:47.246 baseband/*: missing internal dependency, "bbdev"
00:01:47.246 gpu/*: missing internal dependency, "gpudev"
00:01:47.246
00:01:47.246 Build targets in project: 85
00:01:47.246
00:01:47.246 DPDK 23.11.0
00:01:47.246
00:01:47.246 User defined options
00:01:47.246 buildtype : debug
00:01:47.246 default_library : shared
00:01:47.246 libdir : lib
00:01:47.246 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:47.246 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds
00:01:47.246 c_link_args :
00:01:47.246 cpu_instruction_set: native
00:01:47.246 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev
00:01:47.246 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev
00:01:47.246 enable_docs : false
00:01:47.246 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:47.246 enable_kmods : false
00:01:47.246 tests : false
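The "User defined options" block records the -D options SPDK's configure passed to meson for the bundled DPDK. A hand-run equivalent of the core options might look like the sketch below (buildtype, default_library, libdir and prefix are standard meson options; cpu_instruction_set and the enable/disable lists are DPDK project options copied from the block above, with the long disable_apps/disable_libs lists elided for brevity):

    meson setup build-tmp \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Dlibdir=lib \
        -Dprefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_docs=false -Denable_kmods=false -Dtests=false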
00:01:47.246
00:01:47.246 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:47.246 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:47.509 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:47.510 [2/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:47.510 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:47.510 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:47.510 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:47.510 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:47.510 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:47.510 [8/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:47.510 [9/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:47.510 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:47.510 [11/265] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:47.510 [12/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:47.510 [13/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:47.510 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:47.510 [15/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:47.510 [16/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:47.510 [17/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:47.510 [18/265] Linking static target lib/librte_log.a
00:01:47.510 [19/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:47.510 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:47.510 [21/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:47.510 [22/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:47.510 [23/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:47.510 [24/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:47.510 [25/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:47.510 [26/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:47.510 [27/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:47.770 [28/265] Linking static target lib/librte_pci.a
00:01:47.770 [29/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:47.770 [30/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:47.770 [31/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:47.770 [32/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:47.770 [33/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:47.770 [34/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:47.770 [35/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:47.770 [36/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:47.770 [37/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:48.029 [38/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:48.029 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:48.029 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:48.029 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:48.029 [42/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:48.029 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:48.029 [44/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:48.029 [45/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:48.029 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:48.029 [47/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:48.029 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:48.029 [49/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:48.029 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:48.029 [51/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:48.029 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:48.029 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:48.029 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:48.029 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:48.029 [56/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:48.029 [57/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:48.029 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:48.029 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:48.029 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:48.029 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:48.029 [62/265] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:48.029 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:48.029 [64/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:48.029 [65/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:48.029 [66/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:48.029 [67/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:48.029 [68/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:48.029 [69/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:48.029 [70/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:48.029 [71/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:48.029 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:48.029 [73/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:48.029 [74/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:48.029 [75/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:48.029 [76/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.029 [77/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:48.029 [78/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:48.029 [79/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:48.029 [80/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:48.029 [81/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:48.029 [82/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.029 [83/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:48.029 [84/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:48.029 [85/265] Linking static target lib/librte_meter.a
00:01:48.029 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:48.029 [87/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:48.029 [88/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:48.029 [89/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:48.029 [90/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:48.029 [91/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:48.029 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:48.029 [93/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:48.029 [94/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:48.029 [95/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:48.029 [96/265] Linking static target lib/librte_ring.a
00:01:48.029 [97/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:48.029 [98/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:48.029 [99/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:48.029 [100/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:48.030 [101/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:48.030 [102/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:48.030 [103/265] Linking static target lib/librte_telemetry.a
00:01:48.030 [104/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:48.030 [105/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:48.030 [106/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:48.030 [107/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:48.030 [108/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:48.030 [109/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:48.030 [110/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:48.030 [111/265] Linking static target lib/librte_mempool.a
00:01:48.030 [112/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:48.030 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:48.030 [114/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:48.030 [115/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:48.030 [116/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:48.288 [117/265] Linking static target lib/librte_cmdline.a
00:01:48.288 [118/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:48.288 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:48.288 [120/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:48.288 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:48.288 [122/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:48.288 [123/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:48.288 [124/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:48.288 [125/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:48.288 [126/265] Linking static target lib/librte_timer.a
00:01:48.288 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:48.288 [128/265] Linking static target lib/librte_net.a
00:01:48.288 [129/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:48.288 [130/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:48.288 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:48.288 [132/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:48.288 [133/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:48.288 [134/265] Linking static target lib/librte_rcu.a
00:01:48.288 [135/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:48.288 [136/265] Linking static target lib/librte_eal.a
00:01:48.288 [137/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:48.288 [138/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.288 [139/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:48.288 [140/265] Linking static target lib/librte_compressdev.a
00:01:48.288 [141/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:48.288 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:48.288 [143/265] Linking target lib/librte_log.so.24.0
00:01:48.288 [144/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.288 [145/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:48.288 [146/265] Linking static target lib/librte_mbuf.a
00:01:48.288 [147/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:48.288 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:48.288 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:48.289 [150/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:48.289 [151/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.289 [152/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:48.289 [153/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:48.289 [154/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:48.289 [155/265] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:48.289 [156/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:48.289 [157/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:48.289 [158/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:48.289 [159/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:48.289 [160/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:48.289 [161/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:48.289 [162/265] Linking static target lib/librte_dmadev.a
00:01:48.289 [163/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols
00:01:48.289 [164/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:48.289 [165/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:48.289 [166/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:48.289 [167/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:48.289 [168/265] Linking target lib/librte_kvargs.so.24.0
00:01:48.289 [169/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.289 [170/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:48.289 [171/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:48.289 [172/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:48.289 [173/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.289 [174/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:48.289 [175/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:48.289 [176/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.289 [177/265] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:48.289 [178/265] Linking static target lib/librte_hash.a
00:01:48.289 [179/265] Linking static target lib/librte_power.a
00:01:48.289 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:48.289 [181/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:48.289 [182/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:48.289 [183/265] Linking target lib/librte_telemetry.so.24.0
00:01:48.289 [184/265] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:48.289 [185/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.289 [186/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:48.289 [187/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols
00:01:48.289 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:48.548 [189/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:48.548 [190/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:48.548 [191/265] Linking static target drivers/librte_bus_vdev.a
00:01:48.548 [192/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:48.548 [193/265] Linking static target lib/librte_reorder.a
00:01:48.548 [194/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:48.548 [195/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:48.548 [196/265] Linking static target lib/librte_security.a
00:01:48.808 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:48.808 [198/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:48.808 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:48.808 [200/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols
00:01:48.808 [201/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:48.808 [202/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:48.808 [203/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:48.808 [204/265] Linking static target drivers/librte_mempool_ring.a
00:01:48.808 [205/265] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:48.808 [206/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:48.808 [207/265] Linking static target lib/librte_cryptodev.a
00:01:48.808 [208/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:48.808 [209/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:48.808 [210/265] Linking static target drivers/librte_bus_pci.a
00:01:48.808 [211/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.808 [212/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.808 [213/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.808 [214/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.067 [215/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.067 [216/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.067 [217/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.067 [218/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.326 [219/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:49.326 [220/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:01:49.326 [221/265] Linking static target lib/librte_ethdev.a
00:01:49.326 [222/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.326 [223/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:49.585 [224/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.521 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:01:50.521 [226/265] Linking static target lib/librte_vhost.a
00:01:50.521 [227/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:51.899 [228/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:01:57.170 [229/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.107 [230/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:01:58.107 [231/265] Linking target lib/librte_eal.so.24.0
00:01:58.366 [232/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols
00:01:58.366 [233/265] Linking target lib/librte_ring.so.24.0
00:01:58.366 [234/265] Linking target lib/librte_meter.so.24.0
00:01:58.366 [235/265] Linking target lib/librte_timer.so.24.0
00:01:58.366 [236/265] Linking target drivers/librte_bus_vdev.so.24.0
00:01:58.366 [237/265] Linking target lib/librte_pci.so.24.0
00:01:58.366 [238/265] Linking target lib/librte_dmadev.so.24.0
00:01:58.366 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols
00:01:58.366 [240/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols
00:01:58.366 [241/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols
00:01:58.366 [242/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols
00:01:58.366 [243/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols
00:01:58.366 [244/265] Linking target drivers/librte_bus_pci.so.24.0
00:01:58.366 [245/265] Linking target lib/librte_rcu.so.24.0
00:01:58.366 [246/265] Linking target lib/librte_mempool.so.24.0
00:01:58.624 [247/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols
00:01:58.624 [248/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols
00:01:58.624 [249/265] Linking target drivers/librte_mempool_ring.so.24.0
00:01:58.624 [250/265] Linking target lib/librte_mbuf.so.24.0
00:01:58.883 [251/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols
00:01:58.883 [252/265] Linking target lib/librte_compressdev.so.24.0
00:01:58.883 [253/265] Linking target lib/librte_net.so.24.0
00:01:58.883 [254/265] Linking target lib/librte_reorder.so.24.0
00:01:58.883 [255/265] Linking target lib/librte_cryptodev.so.24.0
00:01:58.883 [256/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols
00:01:58.883 [257/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols
00:01:59.141 [258/265] Linking target lib/librte_security.so.24.0
00:01:59.142 [259/265] Linking target lib/librte_cmdline.so.24.0
00:01:59.142 [260/265] Linking target lib/librte_hash.so.24.0
00:01:59.142 [261/265] Linking target lib/librte_ethdev.so.24.0
00:01:59.142 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols
00:01:59.142 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols
00:01:59.142 [264/265] Linking target lib/librte_power.so.24.0
00:01:59.142 [265/265] Linking target lib/librte_vhost.so.24.0
00:01:59.142 INFO: autodetecting backend as ninja
00:01:59.142 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96
00:02:00.077 CC lib/ut_mock/mock.o
00:02:00.077 CC lib/log/log.o
00:02:00.077 CC lib/log/log_flags.o
00:02:00.077 CC lib/log/log_deprecated.o
00:02:00.077 CC lib/ut/ut.o
00:02:00.077 LIB libspdk_ut_mock.a
00:02:00.077 SO libspdk_ut_mock.so.5.0
00:02:00.077 LIB libspdk_log.a
00:02:00.336 LIB libspdk_ut.a
00:02:00.336 SO libspdk_log.so.6.1
00:02:00.336 SYMLINK libspdk_ut_mock.so
00:02:00.336 SO libspdk_ut.so.1.0
00:02:00.336 SYMLINK libspdk_log.so
00:02:00.336 SYMLINK libspdk_ut.so
00:02:00.594 CC lib/util/base64.o
00:02:00.594 CC lib/util/bit_array.o
00:02:00.594 CC lib/util/cpuset.o
00:02:00.594 CC lib/util/crc16.o
00:02:00.594 CC lib/util/crc32.o
00:02:00.594 CC lib/util/crc32c.o
00:02:00.594 CC lib/util/crc32_ieee.o
00:02:00.594 CC lib/util/crc64.o
00:02:00.594 CC lib/util/dif.o
00:02:00.594 CC lib/util/fd.o
00:02:00.594 CC lib/util/file.o
00:02:00.594 CC lib/util/math.o
00:02:00.594 CC lib/util/hexlify.o
00:02:00.594 CC lib/util/iov.o
00:02:00.594 CC lib/util/strerror_tls.o
00:02:00.594 CC lib/util/pipe.o
00:02:00.594 CC lib/util/string.o
00:02:00.594 CC lib/util/uuid.o
00:02:00.594 CC lib/util/fd_group.o
00:02:00.594 CC lib/util/xor.o
00:02:00.594 CC lib/util/zipf.o
00:02:00.594 CC lib/dma/dma.o
00:02:00.594 CXX lib/trace_parser/trace.o
00:02:00.594 CC lib/ioat/ioat.o
00:02:00.594 CC lib/vfio_user/host/vfio_user_pci.o
00:02:00.594 CC lib/vfio_user/host/vfio_user.o
00:02:00.594 LIB libspdk_dma.a
00:02:00.595 SO libspdk_dma.so.3.0
00:02:00.854 LIB libspdk_ioat.a
00:02:00.854 SYMLINK libspdk_dma.so
00:02:00.854 SO libspdk_ioat.so.6.0
00:02:00.854 LIB libspdk_vfio_user.a
00:02:00.854 SYMLINK libspdk_ioat.so
00:02:00.854 SO libspdk_vfio_user.so.4.0
00:02:00.854 LIB libspdk_util.a
00:02:00.854 SYMLINK libspdk_vfio_user.so
00:02:00.854 SO libspdk_util.so.8.0
00:02:01.112 SYMLINK libspdk_util.so
00:02:01.112 LIB libspdk_trace_parser.a
00:02:01.112 SO libspdk_trace_parser.so.4.0
00:02:01.112 CC lib/json/json_parse.o
00:02:01.112 CC lib/json/json_util.o
00:02:01.112 CC lib/json/json_write.o
00:02:01.112 CC lib/conf/conf.o
00:02:01.410 CC lib/rdma/common.o
00:02:01.410 CC lib/rdma/rdma_verbs.o
00:02:01.410 CC lib/vmd/vmd.o
00:02:01.410 CC lib/vmd/led.o
00:02:01.410 CC lib/idxd/idxd.o
00:02:01.410 CC lib/env_dpdk/memory.o
00:02:01.410 CC lib/env_dpdk/pci.o
00:02:01.410 CC lib/idxd/idxd_user.o
00:02:01.410 CC lib/env_dpdk/env.o
00:02:01.410 CC lib/idxd/idxd_kernel.o
00:02:01.410 CC lib/env_dpdk/init.o
00:02:01.410 CC lib/env_dpdk/threads.o
00:02:01.410 CC lib/env_dpdk/pci_ioat.o
00:02:01.410 CC lib/env_dpdk/pci_virtio.o
00:02:01.410 CC lib/env_dpdk/pci_vmd.o
00:02:01.410 CC lib/env_dpdk/pci_idxd.o
00:02:01.410 CC lib/env_dpdk/pci_event.o
00:02:01.410 CC lib/env_dpdk/sigbus_handler.o
00:02:01.410 CC lib/env_dpdk/pci_dpdk.o
00:02:01.410 CC lib/env_dpdk/pci_dpdk_2207.o
00:02:01.410 CC lib/env_dpdk/pci_dpdk_2211.o
00:02:01.410 SYMLINK libspdk_trace_parser.so
00:02:01.410 LIB libspdk_conf.a
00:02:01.410 SO libspdk_conf.so.5.0
00:02:01.410 LIB libspdk_json.a
00:02:01.410 LIB libspdk_rdma.a
00:02:01.410 SYMLINK libspdk_conf.so
00:02:01.410 SO libspdk_json.so.5.1
00:02:01.668 SO libspdk_rdma.so.5.0
00:02:01.668 SYMLINK libspdk_json.so
00:02:01.668 SYMLINK libspdk_rdma.so
00:02:01.668 LIB libspdk_idxd.a
00:02:01.668 SO libspdk_idxd.so.11.0
00:02:01.668 LIB libspdk_vmd.a
00:02:01.668 SO libspdk_vmd.so.5.0
00:02:01.668 CC lib/jsonrpc/jsonrpc_server.o
00:02:01.668 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:02:01.668 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:02:01.668 CC lib/jsonrpc/jsonrpc_client.o
00:02:01.927 SYMLINK libspdk_idxd.so
00:02:01.927 SYMLINK libspdk_vmd.so
00:02:01.927 LIB libspdk_jsonrpc.a
00:02:01.927 SO libspdk_jsonrpc.so.5.1
00:02:02.186 SYMLINK libspdk_jsonrpc.so
00:02:02.186 CC lib/rpc/rpc.o
00:02:02.186 LIB libspdk_env_dpdk.a
00:02:02.445 SO libspdk_env_dpdk.so.13.0
00:02:02.445 LIB libspdk_rpc.a
00:02:02.445 SYMLINK libspdk_env_dpdk.so
00:02:02.445 SO libspdk_rpc.so.5.0
00:02:02.445 SYMLINK libspdk_rpc.so
00:02:02.704 CC lib/notify/notify.o
00:02:02.704 CC lib/notify/notify_rpc.o
00:02:02.704 CC lib/trace/trace.o
00:02:02.704 CC lib/trace/trace_flags.o
00:02:02.704 CC lib/sock/sock.o
00:02:02.704 CC lib/trace/trace_rpc.o
00:02:02.704 CC lib/sock/sock_rpc.o
00:02:02.963 LIB libspdk_notify.a
00:02:02.963 SO libspdk_notify.so.5.0
00:02:02.963 LIB libspdk_trace.a
00:02:02.963 SO libspdk_trace.so.9.0
00:02:02.963 SYMLINK libspdk_notify.so
00:02:02.963 SYMLINK libspdk_trace.so
00:02:02.963 LIB libspdk_sock.a
00:02:02.963 SO libspdk_sock.so.8.0
00:02:03.221 SYMLINK libspdk_sock.so
00:02:03.221 CC lib/thread/thread.o
00:02:03.221 CC lib/thread/iobuf.o
00:02:03.221 CC lib/nvme/nvme_ctrlr_cmd.o
00:02:03.221 CC lib/nvme/nvme_fabric.o
00:02:03.221 CC lib/nvme/nvme_ctrlr.o
00:02:03.221 CC lib/nvme/nvme_ns_cmd.o
00:02:03.221 CC lib/nvme/nvme_ns.o
00:02:03.221 CC lib/nvme/nvme_pcie_common.o
00:02:03.221 CC lib/nvme/nvme_pcie.o
00:02:03.221 CC lib/nvme/nvme_qpair.o
00:02:03.221 CC lib/nvme/nvme.o
00:02:03.221 CC lib/nvme/nvme_quirks.o
00:02:03.221 CC lib/nvme/nvme_transport.o
00:02:03.221 CC lib/nvme/nvme_discovery.o
00:02:03.221 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:02:03.221 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:02:03.221 CC lib/nvme/nvme_poll_group.o
00:02:03.221 CC lib/nvme/nvme_tcp.o
00:02:03.221 CC lib/nvme/nvme_opal.o
00:02:03.221 CC lib/nvme/nvme_io_msg.o
00:02:03.221 CC lib/nvme/nvme_vfio_user.o
00:02:03.221 CC lib/nvme/nvme_zns.o
00:02:03.221 CC lib/nvme/nvme_cuse.o
00:02:03.221 CC lib/nvme/nvme_rdma.o
00:02:04.598 LIB libspdk_thread.a
00:02:04.598 SO libspdk_thread.so.9.0
00:02:04.598 SYMLINK libspdk_thread.so
00:02:04.598 CC lib/accel/accel.o
00:02:04.598 CC lib/accel/accel_rpc.o
00:02:04.598 CC lib/accel/accel_sw.o
00:02:04.598 CC lib/init/json_config.o
00:02:04.598 CC lib/init/subsystem.o
00:02:04.598 CC lib/init/subsystem_rpc.o
00:02:04.598 CC lib/init/rpc.o
00:02:04.598 CC lib/virtio/virtio.o
00:02:04.598 CC lib/virtio/virtio_vhost_user.o
00:02:04.598 CC lib/blob/blobstore.o
00:02:04.598 CC lib/virtio/virtio_vfio_user.o
00:02:04.598 CC lib/blob/request.o
00:02:04.598 CC lib/virtio/virtio_pci.o
00:02:04.598 CC lib/blob/zeroes.o
00:02:04.598 CC lib/blob/blob_bs_dev.o
00:02:04.856 LIB libspdk_init.a
00:02:04.856 LIB libspdk_nvme.a
00:02:04.856 SO libspdk_init.so.4.0
00:02:04.856 LIB libspdk_virtio.a
00:02:04.856 SYMLINK libspdk_init.so
00:02:04.856 SO libspdk_nvme.so.12.0
00:02:04.856 SO libspdk_virtio.so.6.0
00:02:04.856 SYMLINK libspdk_virtio.so
00:02:05.115 CC lib/event/app.o
00:02:05.115 CC lib/event/reactor.o
00:02:05.115 CC lib/event/log_rpc.o
00:02:05.115 CC lib/event/app_rpc.o
00:02:05.115 CC lib/event/scheduler_static.o
00:02:05.115 SYMLINK libspdk_nvme.so
00:02:05.374 LIB libspdk_accel.a
00:02:05.374 SO libspdk_accel.so.14.0
00:02:05.374 LIB libspdk_event.a
00:02:05.374 SO libspdk_event.so.12.0
00:02:05.374 SYMLINK libspdk_accel.so
00:02:05.374 SYMLINK libspdk_event.so
00:02:05.633 CC lib/bdev/bdev.o
00:02:05.633 CC lib/bdev/bdev_rpc.o
00:02:05.633 CC lib/bdev/bdev_zone.o
00:02:05.633 CC lib/bdev/part.o
00:02:05.634 CC lib/bdev/scsi_nvme.o
00:02:06.573 LIB libspdk_blob.a
00:02:06.573 SO libspdk_blob.so.10.1
00:02:06.573 SYMLINK libspdk_blob.so
00:02:06.833 CC lib/lvol/lvol.o
00:02:06.833 CC lib/blobfs/blobfs.o
00:02:06.833 CC lib/blobfs/tree.o
00:02:07.401 LIB libspdk_bdev.a
00:02:07.401 SO libspdk_bdev.so.14.0
00:02:07.401 LIB libspdk_blobfs.a
00:02:07.401 LIB libspdk_lvol.a
00:02:07.401 SO libspdk_blobfs.so.9.0
00:02:07.401 SO libspdk_lvol.so.9.1
00:02:07.401 SYMLINK libspdk_bdev.so
00:02:07.401 SYMLINK libspdk_blobfs.so
00:02:07.401 SYMLINK libspdk_lvol.so
00:02:07.660 CC lib/nvmf/ctrlr_discovery.o
00:02:07.660 CC lib/nvmf/ctrlr.o
00:02:07.660 CC lib/nvmf/ctrlr_bdev.o
00:02:07.660 CC lib/nvmf/subsystem.o
00:02:07.660 CC lib/nvmf/nvmf.o
00:02:07.660 CC lib/nvmf/nvmf_rpc.o
00:02:07.660 CC lib/nvmf/transport.o
00:02:07.660 CC lib/nvmf/tcp.o
00:02:07.660 CC lib/nvmf/rdma.o
00:02:07.660 CC lib/ublk/ublk.o
00:02:07.660 CC lib/ublk/ublk_rpc.o
00:02:07.660 CC lib/ftl/ftl_core.o
00:02:07.660 CC lib/ftl/ftl_init.o
00:02:07.660 CC lib/scsi/dev.o
00:02:07.660 CC lib/ftl/ftl_layout.o
00:02:07.660 CC lib/ftl/ftl_sb.o
00:02:07.660 CC lib/ftl/ftl_debug.o
00:02:07.660 CC lib/scsi/lun.o
00:02:07.660 CC lib/ftl/ftl_l2p.o
00:02:07.660 CC lib/ftl/ftl_io.o
00:02:07.660 CC lib/scsi/port.o
00:02:07.660 CC lib/scsi/scsi.o
00:02:07.660 CC lib/scsi/scsi_bdev.o
00:02:07.660 CC lib/scsi/scsi_rpc.o
00:02:07.660 CC lib/ftl/ftl_l2p_flat.o
00:02:07.660 CC lib/scsi/scsi_pr.o
00:02:07.660 CC lib/ftl/ftl_nv_cache.o
00:02:07.660 CC lib/ftl/ftl_band.o
00:02:07.660 CC lib/scsi/task.o
00:02:07.660 CC lib/nbd/nbd.o
00:02:07.660 CC lib/ftl/ftl_band_ops.o
00:02:07.660 CC lib/nbd/nbd_rpc.o
00:02:07.660 CC lib/ftl/ftl_writer.o
00:02:07.660 CC lib/ftl/ftl_reloc.o
00:02:07.660 CC lib/ftl/ftl_rq.o
00:02:07.660 CC lib/ftl/ftl_l2p_cache.o
00:02:07.660 CC lib/ftl/ftl_p2l.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_startup.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_md.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_misc.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_band.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:02:07.660 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:02:07.660 CC lib/ftl/utils/ftl_conf.o
00:02:07.660 CC lib/ftl/utils/ftl_md.o
00:02:07.660 CC lib/ftl/utils/ftl_bitmap.o
00:02:07.660 CC lib/ftl/utils/ftl_mempool.o
00:02:07.660 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:02:07.660 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:02:07.660 CC lib/ftl/utils/ftl_property.o
00:02:07.660 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:02:07.660 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:02:07.660 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:02:07.660 CC lib/ftl/upgrade/ftl_sb_v3.o
00:02:07.660 CC lib/ftl/upgrade/ftl_sb_v5.o
00:02:07.660 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:02:07.660 CC lib/ftl/nvc/ftl_nvc_dev.o
00:02:07.660 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:02:07.660 CC lib/ftl/base/ftl_base_dev.o
00:02:07.660 CC lib/ftl/ftl_trace.o
00:02:07.660 CC lib/ftl/base/ftl_base_bdev.o
00:02:08.227 LIB libspdk_scsi.a
00:02:08.227 LIB libspdk_nbd.a
00:02:08.227 SO libspdk_scsi.so.8.0
00:02:08.227 SO libspdk_nbd.so.6.0
00:02:08.227 SYMLINK libspdk_nbd.so
00:02:08.227 SYMLINK libspdk_scsi.so
00:02:08.227 LIB libspdk_ublk.a
00:02:08.227 SO libspdk_ublk.so.2.0
00:02:08.227 SYMLINK libspdk_ublk.so
00:02:08.486 CC lib/vhost/vhost.o
00:02:08.486 CC lib/vhost/vhost_rpc.o
00:02:08.486 CC lib/vhost/vhost_scsi.o
00:02:08.486 CC lib/vhost/vhost_blk.o
00:02:08.486 CC lib/vhost/rte_vhost_user.o
00:02:08.486 CC lib/iscsi/init_grp.o
00:02:08.486 CC lib/iscsi/conn.o
00:02:08.486 CC lib/iscsi/iscsi.o
00:02:08.486 CC lib/iscsi/param.o
00:02:08.486 CC lib/iscsi/md5.o
00:02:08.486 CC lib/iscsi/portal_grp.o
00:02:08.486 CC lib/iscsi/tgt_node.o
00:02:08.486 CC lib/iscsi/iscsi_subsystem.o
00:02:08.486 CC lib/iscsi/iscsi_rpc.o
00:02:08.486 CC lib/iscsi/task.o
00:02:08.486 LIB libspdk_ftl.a
00:02:08.486 SO libspdk_ftl.so.8.0
00:02:08.744 SYMLINK libspdk_ftl.so
00:02:09.313 LIB libspdk_nvmf.a
00:02:09.313 LIB libspdk_vhost.a
00:02:09.313 SO libspdk_nvmf.so.17.0
00:02:09.313 SO libspdk_vhost.so.7.1
00:02:09.313 SYMLINK libspdk_vhost.so
00:02:09.313 SYMLINK libspdk_nvmf.so
00:02:09.313 LIB libspdk_iscsi.a
00:02:09.313 SO libspdk_iscsi.so.7.0
00:02:09.573 SYMLINK libspdk_iscsi.so
00:02:09.832 CC module/env_dpdk/env_dpdk_rpc.o
00:02:10.091 CC module/sock/posix/posix.o
00:02:10.091 CC module/blob/bdev/blob_bdev.o
00:02:10.091 CC module/accel/ioat/accel_ioat.o
00:02:10.091 CC module/accel/ioat/accel_ioat_rpc.o
00:02:10.091 CC module/scheduler/gscheduler/gscheduler.o
00:02:10.091 CC module/accel/dsa/accel_dsa_rpc.o
00:02:10.091 CC module/accel/dsa/accel_dsa.o
00:02:10.091 CC module/accel/iaa/accel_iaa.o
00:02:10.091 CC module/accel/iaa/accel_iaa_rpc.o
00:02:10.091 CC module/accel/error/accel_error.o
00:02:10.091 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:02:10.091 CC module/scheduler/dynamic/scheduler_dynamic.o
00:02:10.091 CC module/accel/error/accel_error_rpc.o
00:02:10.091 LIB libspdk_env_dpdk_rpc.a
00:02:10.091 SO libspdk_env_dpdk_rpc.so.5.0
00:02:10.091 SYMLINK libspdk_env_dpdk_rpc.so
00:02:10.091 LIB libspdk_scheduler_gscheduler.a
00:02:10.091 LIB libspdk_accel_ioat.a
00:02:10.091 LIB libspdk_scheduler_dpdk_governor.a
00:02:10.091 SO libspdk_scheduler_gscheduler.so.3.0
00:02:10.091 LIB libspdk_accel_dsa.a
00:02:10.091 LIB libspdk_accel_iaa.a
00:02:10.091 LIB libspdk_accel_error.a
00:02:10.091 SO libspdk_scheduler_dpdk_governor.so.3.0
00:02:10.091 LIB libspdk_scheduler_dynamic.a
00:02:10.091 SO libspdk_accel_ioat.so.5.0
00:02:10.091 LIB libspdk_blob_bdev.a
00:02:10.091 SO libspdk_accel_iaa.so.2.0
00:02:10.091 SYMLINK libspdk_scheduler_gscheduler.so
00:02:10.091 SO libspdk_accel_dsa.so.4.0
00:02:10.091 SO libspdk_accel_error.so.1.0
00:02:10.091 SO libspdk_scheduler_dynamic.so.3.0
00:02:10.349 SO libspdk_blob_bdev.so.10.1
00:02:10.349 SYMLINK libspdk_scheduler_dpdk_governor.so
00:02:10.349 SYMLINK libspdk_accel_ioat.so
00:02:10.349 SYMLINK libspdk_accel_iaa.so
00:02:10.349 SYMLINK libspdk_scheduler_dynamic.so
00:02:10.349 SYMLINK libspdk_accel_error.so
00:02:10.349 SYMLINK libspdk_accel_dsa.so
00:02:10.349 SYMLINK libspdk_blob_bdev.so
00:02:10.607 LIB libspdk_sock_posix.a
00:02:10.607 SO libspdk_sock_posix.so.5.0
00:02:10.607 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:02:10.607 CC module/bdev/lvol/vbdev_lvol.o
00:02:10.607 CC module/bdev/malloc/bdev_malloc.o
00:02:10.607 CC module/bdev/malloc/bdev_malloc_rpc.o
00:02:10.607 CC module/bdev/error/vbdev_error_rpc.o
00:02:10.607 CC module/bdev/error/vbdev_error.o
00:02:10.607 CC module/bdev/ftl/bdev_ftl.o
00:02:10.607 CC module/bdev/gpt/gpt.o
00:02:10.607 CC module/bdev/gpt/vbdev_gpt.o
00:02:10.607 CC module/bdev/iscsi/bdev_iscsi.o
00:02:10.607 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:02:10.607 CC module/bdev/ftl/bdev_ftl_rpc.o
00:02:10.607 CC module/bdev/passthru/vbdev_passthru.o
00:02:10.607 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:02:10.607 CC module/bdev/null/bdev_null.o
00:02:10.607 CC module/bdev/null/bdev_null_rpc.o
00:02:10.607 CC module/bdev/virtio/bdev_virtio_scsi.o
00:02:10.607 CC module/bdev/delay/vbdev_delay_rpc.o
00:02:10.607 CC module/bdev/virtio/bdev_virtio_blk.o
00:02:10.607 CC module/bdev/delay/vbdev_delay.o
00:02:10.607 CC module/bdev/virtio/bdev_virtio_rpc.o
00:02:10.607 CC module/bdev/raid/bdev_raid.o
00:02:10.607 CC module/blobfs/bdev/blobfs_bdev.o
00:02:10.607 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:02:10.607 CC module/bdev/nvme/bdev_nvme.o
00:02:10.607 CC module/bdev/raid/bdev_raid_rpc.o
00:02:10.607 CC module/bdev/aio/bdev_aio_rpc.o
00:02:10.607 CC module/bdev/aio/bdev_aio.o
00:02:10.607 CC module/bdev/raid/raid1.o
00:02:10.607 CC module/bdev/raid/bdev_raid_sb.o
00:02:10.607 CC module/bdev/raid/raid0.o
00:02:10.607 CC module/bdev/nvme/bdev_nvme_rpc.o
00:02:10.607 CC module/bdev/nvme/nvme_rpc.o
00:02:10.607 CC module/bdev/raid/concat.o
00:02:10.607 CC module/bdev/zone_block/vbdev_zone_block.o
00:02:10.607 CC module/bdev/split/vbdev_split.o
00:02:10.607 CC module/bdev/nvme/bdev_mdns_client.o
00:02:10.607 CC module/bdev/split/vbdev_split_rpc.o
00:02:10.607 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:02:10.607 CC module/bdev/nvme/vbdev_opal.o
00:02:10.607 CC module/bdev/nvme/vbdev_opal_rpc.o
00:02:10.607 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:02:10.607 SYMLINK libspdk_sock_posix.so
00:02:10.866 LIB libspdk_blobfs_bdev.a
00:02:10.866 SO libspdk_blobfs_bdev.so.5.0
00:02:10.866 LIB libspdk_bdev_null.a
00:02:10.866 LIB libspdk_bdev_ftl.a
00:02:10.866 LIB libspdk_bdev_split.a
00:02:10.866 LIB libspdk_bdev_gpt.a
00:02:10.866 LIB libspdk_bdev_error.a
00:02:10.866 SO libspdk_bdev_null.so.5.0
00:02:10.866 LIB libspdk_bdev_passthru.a
00:02:10.866 SO libspdk_bdev_ftl.so.5.0
00:02:10.866 SYMLINK libspdk_blobfs_bdev.so
00:02:10.866 SO libspdk_bdev_split.so.5.0
00:02:10.866 SO libspdk_bdev_gpt.so.5.0
00:02:10.866 SO libspdk_bdev_passthru.so.5.0
00:02:10.866 SO libspdk_bdev_error.so.5.0
00:02:10.866 SYMLINK libspdk_bdev_null.so
00:02:10.866 LIB libspdk_bdev_delay.a
00:02:10.866 LIB libspdk_bdev_aio.a
00:02:10.866 LIB libspdk_bdev_malloc.a
00:02:10.866 SYMLINK libspdk_bdev_split.so
00:02:10.866 SYMLINK libspdk_bdev_ftl.so
00:02:10.866 LIB libspdk_bdev_zone_block.a
00:02:10.866 SYMLINK libspdk_bdev_gpt.so
00:02:10.866 SO libspdk_bdev_delay.so.5.0
00:02:10.866 SYMLINK libspdk_bdev_passthru.so
00:02:10.866 LIB libspdk_bdev_iscsi.a
00:02:10.866 SO libspdk_bdev_aio.so.5.0
00:02:10.866 SO libspdk_bdev_malloc.so.5.0
00:02:10.866 SYMLINK libspdk_bdev_error.so
00:02:10.866 SO libspdk_bdev_zone_block.so.5.0
00:02:10.866 LIB libspdk_bdev_lvol.a
00:02:10.866 SO libspdk_bdev_iscsi.so.5.0
00:02:11.125 SYMLINK libspdk_bdev_delay.so
00:02:11.125 SO libspdk_bdev_lvol.so.5.0
00:02:11.125 SYMLINK libspdk_bdev_aio.so
00:02:11.125 SYMLINK libspdk_bdev_malloc.so
00:02:11.125 SYMLINK libspdk_bdev_zone_block.so
00:02:11.125 SYMLINK libspdk_bdev_iscsi.so
00:02:11.125 LIB libspdk_bdev_virtio.a
00:02:11.125 SYMLINK libspdk_bdev_lvol.so
00:02:11.125 SO libspdk_bdev_virtio.so.5.0
00:02:11.125 SYMLINK libspdk_bdev_virtio.so
00:02:11.383 LIB libspdk_bdev_raid.a
00:02:11.383 SO libspdk_bdev_raid.so.5.0
00:02:11.383 SYMLINK libspdk_bdev_raid.so
00:02:12.321 LIB libspdk_bdev_nvme.a
00:02:12.321 SO libspdk_bdev_nvme.so.6.0
00:02:12.321 SYMLINK libspdk_bdev_nvme.so
00:02:12.580 CC module/event/subsystems/vmd/vmd.o
00:02:12.580 CC module/event/subsystems/vmd/vmd_rpc.o
00:02:12.580 CC module/event/subsystems/scheduler/scheduler.o
00:02:12.580 CC module/event/subsystems/iobuf/iobuf.o
00:02:12.580 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:02:12.580 CC module/event/subsystems/sock/sock.o
00:02:12.580 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:02:12.839 LIB libspdk_event_vmd.a
00:02:12.839 LIB libspdk_event_scheduler.a
00:02:12.839 SO libspdk_event_vmd.so.5.0
00:02:12.839 LIB libspdk_event_iobuf.a
00:02:12.839 LIB libspdk_event_sock.a
00:02:12.839 SO libspdk_event_scheduler.so.3.0
00:02:12.839 LIB libspdk_event_vhost_blk.a
00:02:12.839 SO libspdk_event_iobuf.so.2.0
00:02:12.839 SO libspdk_event_sock.so.4.0
00:02:12.839 SO libspdk_event_vhost_blk.so.2.0
00:02:12.839 SYMLINK libspdk_event_vmd.so
00:02:12.839 SYMLINK libspdk_event_scheduler.so
00:02:12.839 SYMLINK libspdk_event_iobuf.so
00:02:12.839 SYMLINK libspdk_event_sock.so
00:02:12.839 SYMLINK libspdk_event_vhost_blk.so
00:02:13.099 CC module/event/subsystems/accel/accel.o
00:02:13.359 LIB libspdk_event_accel.a
00:02:13.359 SO libspdk_event_accel.so.5.0
00:02:13.359 SYMLINK libspdk_event_accel.so
00:02:13.619 CC module/event/subsystems/bdev/bdev.o
00:02:13.619 LIB libspdk_event_bdev.a
00:02:13.619 SO libspdk_event_bdev.so.5.0
00:02:13.619 SYMLINK libspdk_event_bdev.so
00:02:13.878 CC module/event/subsystems/ublk/ublk.o
00:02:13.878 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:02:13.878 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:02:13.878 CC module/event/subsystems/scsi/scsi.o
00:02:13.878 CC module/event/subsystems/nbd/nbd.o
00:02:14.137 LIB libspdk_event_ublk.a
00:02:14.137 LIB libspdk_event_scsi.a
00:02:14.137 LIB libspdk_event_nbd.a
00:02:14.137 SO libspdk_event_ublk.so.2.0
00:02:14.137 SO libspdk_event_scsi.so.5.0
00:02:14.137 LIB libspdk_event_nvmf.a
00:02:14.137 SO libspdk_event_nbd.so.5.0
00:02:14.137 SO libspdk_event_nvmf.so.5.0
00:02:14.138 SYMLINK libspdk_event_scsi.so
00:02:14.138 SYMLINK libspdk_event_ublk.so
00:02:14.138 SYMLINK libspdk_event_nbd.so
00:02:14.138 SYMLINK libspdk_event_nvmf.so
00:02:14.397 CC module/event/subsystems/iscsi/iscsi.o
00:02:14.397 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:02:14.397 LIB libspdk_event_vhost_scsi.a
00:02:14.397 LIB libspdk_event_iscsi.a
00:02:14.397 SO libspdk_event_vhost_scsi.so.2.0
00:02:14.656 SO libspdk_event_iscsi.so.5.0
00:02:14.656 SYMLINK libspdk_event_vhost_scsi.so
00:02:14.656 SYMLINK libspdk_event_iscsi.so
00:02:14.656 SO libspdk.so.5.0
00:02:14.656 SYMLINK libspdk.so
00:02:14.915 CC app/spdk_nvme_discover/discovery_aer.o
00:02:14.915 CXX app/trace/trace.o
00:02:14.915 CC app/spdk_lspci/spdk_lspci.o
00:02:14.915 CC app/spdk_nvme_identify/identify.o
00:02:14.915 CC app/spdk_top/spdk_top.o
00:02:14.915 CC app/trace_record/trace_record.o
00:02:14.915 CC app/spdk_nvme_perf/perf.o
00:02:14.915 CC test/rpc_client/rpc_client_test.o
00:02:14.915 TEST_HEADER include/spdk/accel.h
00:02:14.915 TEST_HEADER include/spdk/accel_module.h
00:02:14.915 TEST_HEADER include/spdk/barrier.h
00:02:14.915 TEST_HEADER include/spdk/bdev.h
00:02:14.915 TEST_HEADER include/spdk/assert.h
00:02:14.915 TEST_HEADER include/spdk/bdev_module.h
00:02:14.915 TEST_HEADER include/spdk/base64.h
00:02:14.915 TEST_HEADER include/spdk/bdev_zone.h
00:02:14.915 TEST_HEADER include/spdk/bit_array.h
00:02:14.915 TEST_HEADER include/spdk/bit_pool.h
00:02:14.915 TEST_HEADER include/spdk/blob_bdev.h
00:02:14.915 TEST_HEADER include/spdk/blobfs_bdev.h
00:02:14.915 CC examples/interrupt_tgt/interrupt_tgt.o
00:02:14.915 CC app/spdk_dd/spdk_dd.o
00:02:14.915 TEST_HEADER include/spdk/blob.h
00:02:14.915 TEST_HEADER include/spdk/blobfs.h
00:02:14.915 TEST_HEADER include/spdk/conf.h
00:02:14.915 TEST_HEADER include/spdk/config.h
00:02:14.915 TEST_HEADER include/spdk/cpuset.h
00:02:14.915 TEST_HEADER include/spdk/crc32.h
00:02:14.915 TEST_HEADER include/spdk/crc16.h
00:02:14.915 TEST_HEADER include/spdk/dif.h
00:02:14.915 TEST_HEADER include/spdk/crc64.h
00:02:14.915 TEST_HEADER include/spdk/dma.h
00:02:14.915 TEST_HEADER include/spdk/endian.h
00:02:14.915 TEST_HEADER include/spdk/env.h
00:02:14.915 CC app/vhost/vhost.o
00:02:14.915 TEST_HEADER include/spdk/fd_group.h
00:02:14.915 TEST_HEADER include/spdk/env_dpdk.h
00:02:14.915 TEST_HEADER include/spdk/fd.h
00:02:14.915 TEST_HEADER include/spdk/file.h
00:02:14.915 TEST_HEADER include/spdk/event.h
00:02:14.915 TEST_HEADER include/spdk/gpt_spec.h
00:02:14.915 TEST_HEADER include/spdk/ftl.h
00:02:14.915 TEST_HEADER include/spdk/hexlify.h
00:02:14.915 TEST_HEADER include/spdk/idxd.h
00:02:14.915 CC app/iscsi_tgt/iscsi_tgt.o
00:02:14.915 TEST_HEADER include/spdk/histogram_data.h
00:02:14.915 CC app/nvmf_tgt/nvmf_main.o
00:02:14.915 TEST_HEADER include/spdk/idxd_spec.h
00:02:14.915 TEST_HEADER include/spdk/ioat_spec.h
00:02:14.915 TEST_HEADER include/spdk/init.h
00:02:14.915 TEST_HEADER include/spdk/iscsi_spec.h
00:02:14.915 TEST_HEADER include/spdk/ioat.h
00:02:14.915 TEST_HEADER include/spdk/json.h
00:02:14.915 TEST_HEADER include/spdk/log.h
00:02:14.915 TEST_HEADER include/spdk/jsonrpc.h
00:02:14.915 TEST_HEADER include/spdk/likely.h
00:02:14.915 TEST_HEADER include/spdk/lvol.h
00:02:14.915 TEST_HEADER include/spdk/memory.h
00:02:14.915 TEST_HEADER include/spdk/mmio.h
00:02:14.915 TEST_HEADER include/spdk/nbd.h
00:02:14.915 TEST_HEADER include/spdk/notify.h
00:02:14.915 TEST_HEADER include/spdk/nvme.h
00:02:14.915 TEST_HEADER include/spdk/nvme_intel.h
00:02:14.915 TEST_HEADER include/spdk/nvme_ocssd.h
00:02:14.915 TEST_HEADER include/spdk/nvme_spec.h
00:02:14.915 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:02:14.915 CC app/spdk_tgt/spdk_tgt.o
00:02:14.915 TEST_HEADER include/spdk/nvme_zns.h
00:02:14.915 TEST_HEADER include/spdk/nvmf_cmd.h
00:02:14.915 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:02:14.915 TEST_HEADER include/spdk/nvmf.h
00:02:14.915 TEST_HEADER include/spdk/nvmf_spec.h
00:02:14.916 TEST_HEADER include/spdk/nvmf_transport.h
00:02:14.916 TEST_HEADER include/spdk/opal_spec.h
00:02:14.916 TEST_HEADER include/spdk/opal.h
00:02:14.916 TEST_HEADER include/spdk/queue.h
00:02:14.916 TEST_HEADER include/spdk/pci_ids.h
00:02:14.916 TEST_HEADER include/spdk/pipe.h
00:02:14.916 TEST_HEADER include/spdk/rpc.h
00:02:14.916 TEST_HEADER include/spdk/scheduler.h
00:02:14.916 TEST_HEADER include/spdk/reduce.h
00:02:14.916 TEST_HEADER include/spdk/scsi.h
00:02:14.916 TEST_HEADER include/spdk/sock.h
00:02:14.916 TEST_HEADER include/spdk/scsi_spec.h
00:02:14.916 TEST_HEADER include/spdk/stdinc.h
00:02:14.916 TEST_HEADER include/spdk/string.h
00:02:14.916 TEST_HEADER include/spdk/thread.h
00:02:14.916 CC examples/nvme/hello_world/hello_world.o
00:02:14.916 TEST_HEADER include/spdk/trace.h
00:02:14.916 TEST_HEADER include/spdk/trace_parser.h
00:02:14.916 TEST_HEADER include/spdk/tree.h
00:02:14.916 TEST_HEADER include/spdk/ublk.h
00:02:14.916 TEST_HEADER include/spdk/util.h
00:02:14.916 TEST_HEADER include/spdk/uuid.h
00:02:14.916 TEST_HEADER include/spdk/version.h
00:02:14.916 TEST_HEADER include/spdk/vfio_user_pci.h
00:02:14.916 TEST_HEADER include/spdk/vfio_user_spec.h
00:02:14.916 TEST_HEADER include/spdk/vhost.h
00:02:14.916 CC examples/nvme/reconnect/reconnect.o
00:02:14.916 TEST_HEADER include/spdk/vmd.h
00:02:14.916 CC examples/nvme/abort/abort.o
00:02:14.916 TEST_HEADER include/spdk/zipf.h
00:02:14.916 TEST_HEADER include/spdk/xor.h
00:02:14.916 CC examples/nvme/nvme_manage/nvme_manage.o
00:02:14.916 CXX test/cpp_headers/accel_module.o
00:02:14.916 CXX test/cpp_headers/assert.o
00:02:14.916 CXX test/cpp_headers/accel.o
00:02:14.916 CC test/event/reactor_perf/reactor_perf.o
00:02:14.916 CXX test/cpp_headers/barrier.o
00:02:14.916 CXX test/cpp_headers/base64.o
00:02:14.916 CXX test/cpp_headers/bdev.o
00:02:14.916 CC examples/vmd/lsvmd/lsvmd.o
00:02:14.916 CC examples/idxd/perf/perf.o
00:02:14.916 CXX test/cpp_headers/bdev_module.o
00:02:14.916 CC examples/util/zipf/zipf.o
00:02:14.916 CXX test/cpp_headers/bit_array.o
00:02:14.916 CXX test/cpp_headers/bdev_zone.o
00:02:15.177 CXX test/cpp_headers/blob_bdev.o
00:02:15.177 CXX test/cpp_headers/bit_pool.o
00:02:15.177 CXX test/cpp_headers/blobfs_bdev.o
00:02:15.177 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:02:15.177 CXX test/cpp_headers/blobfs.o
00:02:15.177 CC examples/ioat/perf/perf.o
00:02:15.177 CC test/event/app_repeat/app_repeat.o
00:02:15.177 CXX test/cpp_headers/config.o
00:02:15.177 CXX test/cpp_headers/conf.o
00:02:15.177 CXX test/cpp_headers/blob.o
00:02:15.177 CC examples/nvme/hotplug/hotplug.o
00:02:15.177 CXX test/cpp_headers/cpuset.o
00:02:15.177 CC test/event/event_perf/event_perf.o
00:02:15.177 CC examples/sock/hello_world/hello_sock.o
00:02:15.177 CC examples/vmd/led/led.o
00:02:15.177 CC examples/ioat/verify/verify.o
00:02:15.177 CC examples/nvme/cmb_copy/cmb_copy.o
00:02:15.177 CC examples/accel/perf/accel_perf.o
00:02:15.177 CC examples/nvme/arbitration/arbitration.o
00:02:15.177 CXX test/cpp_headers/crc16.o
00:02:15.177 CXX test/cpp_headers/crc32.o
00:02:15.177 CC test/nvme/reset/reset.o
00:02:15.177 CXX test/cpp_headers/crc64.o
00:02:15.177 CC app/fio/nvme/fio_plugin.o
00:02:15.177 CC test/blobfs/mkfs/mkfs.o
00:02:15.177 CXX test/cpp_headers/dif.o
00:02:15.177 CC test/nvme/e2edp/nvme_dp.o
00:02:15.177 CC test/nvme/aer/aer.o
00:02:15.177 CC test/thread/poller_perf/poller_perf.o
00:02:15.177 CC test/nvme/err_injection/err_injection.o
00:02:15.177 CC test/event/reactor/reactor.o
00:02:15.177 CC test/env/memory/memory_ut.o
00:02:15.177 CC test/nvme/fused_ordering/fused_ordering.o
00:02:15.177 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:02:15.177 CC test/env/vtophys/vtophys.o
00:02:15.177 CC test/nvme/boot_partition/boot_partition.o
00:02:15.177 CC examples/thread/thread/thread_ex.o
00:02:15.177 CC test/nvme/overhead/overhead.o
00:02:15.177 CC test/nvme/sgl/sgl.o
00:02:15.177 CC test/app/histogram_perf/histogram_perf.o
00:02:15.177 CC test/nvme/reserve/reserve.o
00:02:15.177 CC test/app/jsoncat/jsoncat.o
00:02:15.177 CC test/nvme/simple_copy/simple_copy.o
00:02:15.177 CC test/nvme/connect_stress/connect_stress.o
00:02:15.177 CC test/nvme/startup/startup.o
00:02:15.177 CC examples/nvmf/nvmf/nvmf.o
00:02:15.177 CC test/accel/dif/dif.o
00:02:15.177 CC examples/blob/hello_world/hello_blob.o
00:02:15.177 CC test/env/pci/pci_ut.o
00:02:15.177 CC examples/bdev/hello_world/hello_bdev.o
00:02:15.177 CC test/bdev/bdevio/bdevio.o
00:02:15.177 CC test/dma/test_dma/test_dma.o
00:02:15.177 CC test/nvme/fdp/fdp.o
00:02:15.177 CC test/nvme/cuse/cuse.o
00:02:15.177 CC test/app/bdev_svc/bdev_svc.o
00:02:15.177 CC test/app/stub/stub.o
00:02:15.177 CC examples/blob/cli/blobcli.o
00:02:15.177 CC app/fio/bdev/fio_plugin.o
00:02:15.177 CC test/nvme/doorbell_aers/doorbell_aers.o
00:02:15.177 CC test/event/scheduler/scheduler.o
00:02:15.178 CC examples/bdev/bdevperf/bdevperf.o
00:02:15.178 CC test/nvme/compliance/nvme_compliance.o
00:02:15.178 CC test/lvol/esnap/esnap.o
00:02:15.178 CC test/env/mem_callbacks/mem_callbacks.o
00:02:15.178 LINK spdk_lspci
00:02:15.178 LINK rpc_client_test
00:02:15.464 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:02:15.464 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:02:15.464 LINK vhost
00:02:15.464 LINK interrupt_tgt
00:02:15.464 LINK app_repeat
00:02:15.464 LINK lsvmd
00:02:15.464 LINK reactor_perf
00:02:15.464 LINK event_perf
00:02:15.464 LINK spdk_trace_record
00:02:15.464 LINK spdk_nvme_discover
00:02:15.464 LINK spdk_tgt
00:02:15.464 LINK led
00:02:15.464 LINK jsoncat
00:02:15.464 LINK env_dpdk_post_init
00:02:15.464 LINK hello_world
00:02:15.464 LINK startup
00:02:15.464 LINK mkfs
00:02:15.464 LINK connect_stress
00:02:15.464 LINK zipf
00:02:15.464 LINK nvmf_tgt
00:02:15.464 LINK vtophys
00:02:15.464 LINK bdev_svc
00:02:15.464 LINK fused_ordering
00:02:15.464 LINK iscsi_tgt
00:02:15.464 LINK reactor
00:02:15.464 LINK poller_perf
00:02:15.464 LINK pmr_persistence
00:02:15.464 CXX test/cpp_headers/dma.o
00:02:15.465 CXX test/cpp_headers/endian.o
00:02:15.465 LINK hotplug
00:02:15.465 CXX test/cpp_headers/env_dpdk.o
00:02:15.465 CXX test/cpp_headers/env.o
00:02:15.465 LINK histogram_perf
00:02:15.465 LINK cmb_copy
00:02:15.465 LINK boot_partition
00:02:15.465 LINK thread
00:02:15.465 CXX test/cpp_headers/event.o
00:02:15.465 LINK spdk_dd
00:02:15.465 CXX test/cpp_headers/fd_group.o
00:02:15.465 LINK simple_copy
00:02:15.781 LINK hello_bdev
00:02:15.781 CXX test/cpp_headers/fd.o
00:02:15.781 CXX test/cpp_headers/file.o
00:02:15.781 LINK err_injection
00:02:15.781 LINK verify
00:02:15.781 CXX test/cpp_headers/ftl.o
00:02:15.781 LINK stub
00:02:15.781 CXX test/cpp_headers/gpt_spec.o
00:02:15.781 LINK doorbell_aers
00:02:15.781 LINK ioat_perf
00:02:15.781 LINK hello_sock
00:02:15.781 CXX test/cpp_headers/hexlify.o
00:02:15.781 LINK overhead
00:02:15.781 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:02:15.781 LINK aer
00:02:15.781 LINK reserve
00:02:15.781 LINK reset
00:02:15.781 LINK idxd_perf
00:02:15.781 LINK arbitration
00:02:15.781 LINK reconnect
00:02:15.781 CXX test/cpp_headers/histogram_data.o
00:02:15.781 LINK nvmf
00:02:15.781 CXX test/cpp_headers/idxd.o
00:02:15.781 CXX test/cpp_headers/idxd_spec.o
00:02:15.781 LINK scheduler
00:02:15.781 LINK hello_blob
00:02:15.781 CXX test/cpp_headers/init.o
00:02:15.781 CXX test/cpp_headers/ioat.o
00:02:15.781 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:02:15.781 CXX test/cpp_headers/ioat_spec.o
00:02:15.781 LINK abort
00:02:15.781 CXX test/cpp_headers/iscsi_spec.o
00:02:15.781 CXX test/cpp_headers/json.o
00:02:15.781 CXX test/cpp_headers/jsonrpc.o
00:02:15.781 CXX test/cpp_headers/likely.o
00:02:15.781 LINK sgl
00:02:15.781 LINK nvme_dp
00:02:15.781 CXX test/cpp_headers/log.o
00:02:15.781 CXX test/cpp_headers/lvol.o
00:02:15.781 CXX test/cpp_headers/memory.o
00:02:15.781 CXX test/cpp_headers/mmio.o
00:02:15.781 CXX test/cpp_headers/nbd.o
00:02:15.781 CXX test/cpp_headers/notify.o
00:02:15.781 CXX test/cpp_headers/nvme.o
00:02:15.781 CXX test/cpp_headers/nvme_ocssd.o
00:02:15.781 CXX test/cpp_headers/nvme_intel.o
00:02:15.781 CXX test/cpp_headers/nvme_ocssd_spec.o
00:02:15.781 CXX test/cpp_headers/nvme_zns.o
00:02:15.781 CXX test/cpp_headers/nvme_spec.o
00:02:15.781 CXX test/cpp_headers/nvmf_cmd.o
00:02:15.781 LINK dif
00:02:15.781 CXX test/cpp_headers/nvmf_fc_spec.o
00:02:15.781 CXX test/cpp_headers/nvmf.o
00:02:15.781 CXX test/cpp_headers/nvmf_spec.o
00:02:15.781 LINK bdevio
00:02:15.781 CXX test/cpp_headers/nvmf_transport.o
00:02:15.781 CXX test/cpp_headers/opal.o
00:02:15.781 CXX test/cpp_headers/pci_ids.o
00:02:15.781 CXX test/cpp_headers/opal_spec.o
00:02:15.781 LINK nvme_compliance
00:02:15.781 CXX test/cpp_headers/pipe.o
00:02:15.781 LINK fdp
00:02:15.781 CXX test/cpp_headers/queue.o
00:02:15.781 CXX test/cpp_headers/reduce.o
00:02:15.781 CXX test/cpp_headers/rpc.o
00:02:15.781 CXX test/cpp_headers/scheduler.o
00:02:15.781 CXX test/cpp_headers/scsi.o
00:02:15.781 LINK nvme_manage
00:02:15.781 CXX test/cpp_headers/scsi_spec.o
00:02:15.781 CXX test/cpp_headers/sock.o
00:02:16.039 CXX test/cpp_headers/stdinc.o
00:02:16.039 CXX test/cpp_headers/string.o
00:02:16.039 CXX test/cpp_headers/thread.o
00:02:16.039 CXX test/cpp_headers/trace.o
00:02:16.039 CXX test/cpp_headers/trace_parser.o
00:02:16.039 CXX test/cpp_headers/tree.o
00:02:16.039 CXX test/cpp_headers/ublk.o
00:02:16.039 LINK accel_perf
00:02:16.039 CXX test/cpp_headers/uuid.o
00:02:16.039 CXX test/cpp_headers/util.o
00:02:16.039 CXX test/cpp_headers/version.o
00:02:16.039 LINK test_dma
00:02:16.039 CXX test/cpp_headers/vfio_user_pci.o
00:02:16.039 CXX test/cpp_headers/vfio_user_spec.o
00:02:16.039 CXX test/cpp_headers/vhost.o
00:02:16.039 CXX test/cpp_headers/xor.o
00:02:16.039 CXX test/cpp_headers/vmd.o
00:02:16.039 LINK spdk_trace
00:02:16.040 CXX test/cpp_headers/zipf.o
00:02:16.040 LINK nvme_fuzz
00:02:16.040 LINK pci_ut
00:02:16.040 LINK blobcli
00:02:16.040 LINK spdk_bdev
00:02:16.040 LINK spdk_nvme
00:02:16.040 LINK mem_callbacks
00:02:16.040 LINK spdk_top
00:02:16.298 LINK bdevperf
00:02:16.298 LINK spdk_nvme_perf
00:02:16.298 LINK spdk_nvme_identify
00:02:16.298 LINK memory_ut
00:02:16.298 LINK vhost_fuzz
00:02:16.556 LINK cuse
00:02:17.123 LINK iscsi_fuzz
00:02:19.026 LINK esnap
00:02:19.026
00:02:19.026 real 0m40.161s
00:02:19.026 user 6m10.280s
00:02:19.026 sys 3m5.046s
00:02:19.026 13:44:09 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:19.026 13:44:09 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.026 ************************************
00:02:19.026 END TEST make
00:02:19.026 ************************************
00:02:19.285 13:44:10 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:19.285 13:44:10 -- nvmf/common.sh@7 -- # uname -s
00:02:19.285 13:44:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:19.285 13:44:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:19.285 13:44:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:19.285 13:44:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:19.285 13:44:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:19.285 13:44:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:19.285 13:44:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:19.285 13:44:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:19.285 13:44:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:19.285 13:44:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:19.285 13:44:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:02:19.285 13:44:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:02:19.285 13:44:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:19.285 13:44:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:19.285 13:44:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:19.285 13:44:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:19.285 13:44:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:19.285 13:44:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:19.285 13:44:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:19.285 13:44:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.285 13:44:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.285 13:44:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.285 13:44:10 -- paths/export.sh@5 -- # export PATH
00:02:19.285 13:44:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:19.285 13:44:10 -- nvmf/common.sh@46 -- # : 0
00:02:19.285 13:44:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:02:19.285 13:44:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:02:19.285 13:44:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:02:19.285 13:44:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:02:19.285 13:44:10 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:02:19.285 13:44:10 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:02:19.285 13:44:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:02:19.285 13:44:10 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:02:19.286 13:44:10 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']'
00:02:19.286 13:44:10 -- spdk/autotest.sh@32 -- # uname -s
00:02:19.286 13:44:10 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']'
00:02:19.286 13:44:10 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h'
00:02:19.286 13:44:10 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:19.286 13:44:10 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t'
00:02:19.286 13:44:10 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps
00:02:19.286 13:44:10 -- spdk/autotest.sh@44 -- # modprobe nbd
00:02:19.286 13:44:10 -- spdk/autotest.sh@46 -- # type -P udevadm
00:02:19.286 13:44:10 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:19.286 13:44:10 -- spdk/autotest.sh@48 -- # udevadm_pid=3028928
00:02:19.286 13:44:10 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:02:19.286 13:44:10 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:19.286 13:44:10 -- spdk/autotest.sh@54 -- # echo 3028930
00:02:19.286 13:44:10 -- spdk/autotest.sh@56 -- # echo 3028931
00:02:19.286 13:44:10 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:02:19.286 13:44:10 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]]
00:02:19.286 13:44:10 -- spdk/autotest.sh@60 -- # echo 3028932
00:02:19.286 13:44:10 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
00:02:19.286 13:44:10 -- spdk/autotest.sh@62 -- # echo 3028933
00:02:19.286 13:44:10 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:19.286 13:44:10 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l
00:02:19.286 13:44:10 -- spdk/autotest.sh@68 -- # timing_enter autotest
00:02:19.286 13:44:10 -- common/autotest_common.sh@712 -- # xtrace_disable
00:02:19.286 13:44:10 -- common/autotest_common.sh@10 -- # set +x
00:02:19.286 13:44:10 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l
00:02:19.286 13:44:10 -- spdk/autotest.sh@70 -- # create_test_list
00:02:19.286 13:44:10 -- common/autotest_common.sh@736 -- # xtrace_disable
00:02:19.286 13:44:10 -- common/autotest_common.sh@10 -- # set +x
00:02:19.286 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log
00:02:19.286 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log
00:02:19.286 13:44:10 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:19.286 13:44:10 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:19.286 13:44:10 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:19.286 13:44:10 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:19.286 13:44:10 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:19.286 13:44:10 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod
00:02:19.286 13:44:10 -- common/autotest_common.sh@1440 -- # uname
00:02:19.286 13:44:10 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']'
00:02:19.286 13:44:10 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf
00:02:19.286 13:44:10 -- common/autotest_common.sh@1460 -- # uname
00:02:19.286 13:44:10 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]]
00:02:19.286 13:44:10 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk
00:02:19.286 13:44:10 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc
00:02:19.286 13:44:10 -- spdk/autotest.sh@83 -- # hash lcov
00:02:19.286 13:44:10 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:19.286 13:44:10 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS=
00:02:19.286 --rc lcov_branch_coverage=1
00:02:19.286 --rc lcov_function_coverage=1
00:02:19.286 --rc genhtml_branch_coverage=1
00:02:19.286 --rc genhtml_function_coverage=1
00:02:19.286 --rc genhtml_legend=1
00:02:19.286 --rc geninfo_all_blocks=1
00:02:19.286 '
00:02:19.286 13:44:10 -- spdk/autotest.sh@91 -- # LCOV_OPTS='
00:02:19.286 --rc lcov_branch_coverage=1
00:02:19.286 --rc lcov_function_coverage=1
00:02:19.286 --rc genhtml_branch_coverage=1
00:02:19.286 --rc genhtml_function_coverage=1
00:02:19.286 --rc genhtml_legend=1
00:02:19.286 --rc geninfo_all_blocks=1
00:02:19.286 '
00:02:19.286 13:44:10 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov
00:02:19.286 --rc lcov_branch_coverage=1
00:02:19.286 --rc lcov_function_coverage=1
00:02:19.286 --rc genhtml_branch_coverage=1
00:02:19.286 --rc genhtml_function_coverage=1
00:02:19.286 --rc genhtml_legend=1
00:02:19.286 --rc geninfo_all_blocks=1
00:02:19.286 --no-external'
00:02:19.286 13:44:10 -- spdk/autotest.sh@92 -- # LCOV='lcov
00:02:19.286 --rc lcov_branch_coverage=1
00:02:19.286 --rc lcov_function_coverage=1
00:02:19.286 --rc genhtml_branch_coverage=1
00:02:19.286 --rc genhtml_function_coverage=1
00:02:19.286 --rc genhtml_legend=1
00:02:19.286 --rc geninfo_all_blocks=1
00:02:19.286 --no-external'
00:02:19.286 13:44:10 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:19.286 lcov: LCOV version 1.14
00:02:19.286 13:44:10 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:21.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found
00:02:21.819 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno
00:02:21.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found
00:02:21.819 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno
00:02:21.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found
00:02:21.819 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno
00:02:39.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found
00:02:39.940 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno
00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found
00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno
00:02:39.941
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:39.941 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:39.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:39.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:39.942 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:41.844 13:44:32 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:41.844 13:44:32 -- common/autotest_common.sh@712 -- # xtrace_disable 
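These "no functions found" warnings are expected rather than a failure: each cpp_headers object compiles one public SPDK header into an otherwise empty translation unit, so the generated .gcno contains no function records for geninfo to report on. A minimal sketch of how such a warning arises (hypothetical scratch file names and a generic gcc/lcov coverage invocation, not the exact commands this job ran):

# compile a translation unit that only includes a header -- it defines no functions
echo '#include "spdk/base64.h"' > base64_check.cpp
g++ --coverage -I include -c base64_check.cpp -o base64_check.o   # emits base64_check.gcno
# geninfo finds no function records in the note file and prints the warning seen above
geninfo . --output-filename headers.info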
00:02:41.844 13:44:32 -- common/autotest_common.sh@10 -- # set +x 00:02:41.844 13:44:32 -- spdk/autotest.sh@102 -- # rm -f 00:02:41.844 13:44:32 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:45.129 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:45.129 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:45.129 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:45.129 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:45.129 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:45.129 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:45.129 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:45.129 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:45.130 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:45.130 13:44:35 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:45.130 13:44:35 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:45.130 13:44:35 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:45.130 13:44:35 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:45.130 13:44:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:45.130 13:44:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:45.130 13:44:35 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:45.130 13:44:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:45.130 13:44:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:45.130 13:44:35 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:45.130 13:44:35 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:45.130 13:44:35 -- spdk/autotest.sh@121 -- # grep -v p 00:02:45.130 13:44:35 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:45.130 13:44:35 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:45.130 13:44:35 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:45.130 13:44:35 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:45.130 13:44:35 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:45.130 No valid GPT data, bailing 00:02:45.130 13:44:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:45.130 13:44:35 -- scripts/common.sh@393 -- # pt= 00:02:45.130 13:44:35 -- scripts/common.sh@394 -- # return 1 00:02:45.130 13:44:35 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:45.130 1+0 records in 00:02:45.130 1+0 records out 00:02:45.130 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454885 s, 231 MB/s 00:02:45.130 13:44:35 -- spdk/autotest.sh@129 -- # sync 00:02:45.130 13:44:35 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:45.130 13:44:35 -- common/autotest_common.sh@22 -- # eval 
'reap_spdk_processes 12> /dev/null' 00:02:45.130 13:44:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.402 13:44:40 -- spdk/autotest.sh@135 -- # uname -s 00:02:50.402 13:44:40 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:50.402 13:44:40 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.402 13:44:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:50.402 13:44:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:50.402 13:44:40 -- common/autotest_common.sh@10 -- # set +x 00:02:50.402 ************************************ 00:02:50.402 START TEST setup.sh 00:02:50.402 ************************************ 00:02:50.402 13:44:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.402 * Looking for test storage... 00:02:50.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.402 13:44:41 -- setup/test-setup.sh@10 -- # uname -s 00:02:50.402 13:44:41 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:50.402 13:44:41 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.402 13:44:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:50.402 13:44:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:50.402 13:44:41 -- common/autotest_common.sh@10 -- # set +x 00:02:50.402 ************************************ 00:02:50.402 START TEST acl 00:02:50.402 ************************************ 00:02:50.402 13:44:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.402 * Looking for test storage... 
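The pre-cleanup traced above (before the setup.sh test banner that resumes below) boils down to: skip zoned namespaces, probe each remaining NVMe block device for a partition table, and zero the first MiB of any device not in use. A simplified sketch of that flow, using plain blkid in place of SPDK's spdk-gpt.py helper and the device names seen on this host (root required):

# wipe stale metadata from NVMe block devices that are not in use
for nvme in /sys/block/nvme*n*; do
  dev=/dev/${nvme##*/}
  # zoned devices must not be blindly overwritten
  [[ -e $nvme/queue/zoned && $(<$nvme/queue/zoned) != none ]] && continue
  # "No valid GPT data, bailing" above means this probe found no partition table
  if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
    dd if=/dev/zero of="$dev" bs=1M count=1   # clobber the first MiB, as in the trace
  fi
done
sync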
00:02:50.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.402 13:44:41 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.402 13:44:41 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:50.402 13:44:41 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:50.402 13:44:41 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:50.402 13:44:41 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:50.402 13:44:41 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:50.402 13:44:41 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:50.402 13:44:41 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.402 13:44:41 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:50.402 13:44:41 -- setup/acl.sh@12 -- # devs=() 00:02:50.402 13:44:41 -- setup/acl.sh@12 -- # declare -a devs 00:02:50.402 13:44:41 -- setup/acl.sh@13 -- # drivers=() 00:02:50.402 13:44:41 -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.403 13:44:41 -- setup/acl.sh@51 -- # setup reset 00:02:50.403 13:44:41 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.403 13:44:41 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.743 13:44:44 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:53.743 13:44:44 -- setup/acl.sh@16 -- # local dev driver 00:02:53.743 13:44:44 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.743 13:44:44 -- setup/acl.sh@15 -- # setup output status 00:02:53.743 13:44:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.743 13:44:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:56.279 Hugepages 00:02:56.279 node hugesize free / total 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.279 00:02:56.279 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 
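The device collection above works by parsing `setup.sh status` output field by field; its scan of the remaining DMA engines continues below. A condensed sketch of the pattern implied by the traced acl.sh logic (simplified, not the script's exact source; the status columns are Type, BDF, Vendor, Device, NUMA, Driver):

devs=(); declare -A drivers
while read -r _ dev _ _ _ driver _; do
  [[ $dev == *:*:*.* ]] || continue           # skip the header and hugepage lines
  [[ $driver == nvme ]] || continue           # ioatdma engines are passed over
  [[ $PCI_BLOCKED == *"$dev"* ]] && continue  # honor an explicit block list
  devs+=("$dev"); drivers["$dev"]=$driver
done < <(./scripts/setup.sh status)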
00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.279 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.279 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.279 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:56.280 13:44:46 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:56.280 13:44:46 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:56.280 13:44:46 -- setup/acl.sh@20 -- # continue 00:02:56.280 13:44:46 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:56.280 13:44:46 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:56.280 13:44:46 -- setup/acl.sh@54 -- # run_test denied denied 00:02:56.280 13:44:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:56.280 13:44:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:56.280 13:44:46 -- common/autotest_common.sh@10 -- # set +x 00:02:56.280 ************************************ 00:02:56.280 START TEST denied 00:02:56.280 ************************************ 00:02:56.280 13:44:46 -- common/autotest_common.sh@1104 -- # denied 00:02:56.280 13:44:46 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:02:56.280 13:44:46 -- setup/acl.sh@38 -- # setup output config 00:02:56.280 13:44:46 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:02:56.280 13:44:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:56.280 13:44:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:58.813 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:02:58.813 13:44:49 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:02:58.813 13:44:49 -- setup/acl.sh@28 -- # local dev driver 00:02:58.813 13:44:49 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:58.813 13:44:49 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:02:58.813 13:44:49 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:02:58.813 13:44:49 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:58.813 13:44:49 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:58.813 13:44:49 -- setup/acl.sh@41 -- # setup reset 00:02:58.813 13:44:49 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:58.813 13:44:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:02.997 00:03:02.997 real 0m6.616s 00:03:02.997 user 0m2.134s 00:03:02.997 sys 0m3.777s 00:03:02.997 13:44:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.997 13:44:53 -- common/autotest_common.sh@10 -- # set +x 00:03:02.997 ************************************ 00:03:02.997 END TEST denied 00:03:02.997 ************************************ 00:03:02.997 13:44:53 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:02.997 13:44:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:02.997 13:44:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:02.997 13:44:53 -- common/autotest_common.sh@10 -- # set +x 00:03:02.997 ************************************ 00:03:02.997 START TEST allowed 00:03:02.997 ************************************ 00:03:02.997 13:44:53 -- common/autotest_common.sh@1104 -- # allowed 00:03:02.997 13:44:53 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:02.997 13:44:53 -- setup/acl.sh@45 -- # setup output config 00:03:02.997 13:44:53 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:02.997 13:44:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.997 13:44:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 
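PCI_BLOCKED and PCI_ALLOWED steer setup.sh's controller filter: the denied test above expects the blocked BDF to be skipped during config and then confirms the device is still bound to the kernel nvme driver, while the allowed test (whose config output follows below) expects the opposite rebind. The verification step amounts to a sysfs readlink, sketched here with the BDF from this run:

# check which driver a controller is currently bound to
bdf=0000:5e:00.0
driver=$(readlink -f /sys/bus/pci/devices/$bdf/driver)
[[ ${driver##*/} == nvme ]] && echo "$bdf untouched: still bound to nvme"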
00:03:06.283 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:06.283 13:44:57 -- setup/acl.sh@47 -- # verify 00:03:06.283 13:44:57 -- setup/acl.sh@28 -- # local dev driver 00:03:06.283 13:44:57 -- setup/acl.sh@48 -- # setup reset 00:03:06.283 13:44:57 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:06.283 13:44:57 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.573 00:03:09.573 real 0m6.613s 00:03:09.573 user 0m1.922s 00:03:09.573 sys 0m3.836s 00:03:09.573 13:45:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.573 13:45:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.573 ************************************ 00:03:09.573 END TEST allowed 00:03:09.573 ************************************ 00:03:09.574 00:03:09.574 real 0m19.063s 00:03:09.574 user 0m6.233s 00:03:09.574 sys 0m11.476s 00:03:09.574 13:45:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.574 13:45:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.574 ************************************ 00:03:09.574 END TEST acl 00:03:09.574 ************************************ 00:03:09.574 13:45:00 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.574 13:45:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.574 13:45:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.574 13:45:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.574 ************************************ 00:03:09.574 START TEST hugepages 00:03:09.574 ************************************ 00:03:09.574 13:45:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:09.574 * Looking for test storage... 
00:03:09.574 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:09.574 13:45:00 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:09.574 13:45:00 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:09.574 13:45:00 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:09.574 13:45:00 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:09.574 13:45:00 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:09.574 13:45:00 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:09.574 13:45:00 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:09.574 13:45:00 -- setup/common.sh@18 -- # local node= 00:03:09.574 13:45:00 -- setup/common.sh@19 -- # local var val 00:03:09.574 13:45:00 -- setup/common.sh@20 -- # local mem_f mem 00:03:09.574 13:45:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:09.574 13:45:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:09.574 13:45:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:09.574 13:45:00 -- setup/common.sh@28 -- # mapfile -t mem 00:03:09.574 13:45:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.574 13:45:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168469500 kB' 'MemAvailable: 171699184 kB' 'Buffers: 3896 kB' 'Cached: 14586628 kB' 'SwapCached: 0 kB' 'Active: 11419520 kB' 'Inactive: 3694072 kB' 'Active(anon): 11001564 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526756 kB' 'Mapped: 182964 kB' 'Shmem: 10478496 kB' 'KReclaimable: 523744 kB' 'Slab: 1168504 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 644760 kB' 'KernelStack: 20784 kB' 'PageTables: 9516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12551728 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:09.574 13:45:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.574 13:45:00 -- setup/common.sh@32 -- # continue 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.574 13:45:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.574 13:45:00 -- setup/common.sh@32 -- # continue 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.574 13:45:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.574 13:45:00 -- setup/common.sh@32 -- # continue 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # read -r var val _ 00:03:09.574 13:45:00 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.574 13:45:00 -- setup/common.sh@32 -- # continue 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # IFS=': ' 00:03:09.574 13:45:00 -- setup/common.sh@31 -- # read -r var val _
(the identical '[[ <field> == \H\u\g\e\p\a\g\e\s\i\z\e ]]' / continue / IFS=': ' / read -r var val _ cycle is traced for each remaining /proc/meminfo field -- Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd and HugePages_Surp -- until the requested key matches)
00:03:09.575 13:45:00 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:09.575 13:45:00 -- setup/common.sh@33 -- # echo 2048 00:03:09.575 13:45:00 -- setup/common.sh@33 -- # return 0 00:03:09.575 13:45:00 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:09.575 13:45:00 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:09.575 13:45:00 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:09.575 13:45:00 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:09.575 13:45:00 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:09.575 13:45:00 --
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:09.575 13:45:00 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:09.575 13:45:00 -- setup/hugepages.sh@207 -- # get_nodes 00:03:09.575 13:45:00 -- setup/hugepages.sh@27 -- # local node 00:03:09.575 13:45:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.575 13:45:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:09.575 13:45:00 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:09.575 13:45:00 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:09.575 13:45:00 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:09.575 13:45:00 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:09.575 13:45:00 -- setup/hugepages.sh@208 -- # clear_hp 00:03:09.575 13:45:00 -- setup/hugepages.sh@37 -- # local node hp 00:03:09.575 13:45:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.575 13:45:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.575 13:45:00 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.575 13:45:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.575 13:45:00 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.575 13:45:00 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:09.575 13:45:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.575 13:45:00 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.575 13:45:00 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:09.575 13:45:00 -- setup/hugepages.sh@41 -- # echo 0 00:03:09.575 13:45:00 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:09.575 13:45:00 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:09.575 13:45:00 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:09.575 13:45:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:09.575 13:45:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:09.575 13:45:00 -- common/autotest_common.sh@10 -- # set +x 00:03:09.575 ************************************ 00:03:09.575 START TEST default_setup 00:03:09.576 ************************************ 00:03:09.576 13:45:00 -- common/autotest_common.sh@1104 -- # default_setup 00:03:09.576 13:45:00 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:09.576 13:45:00 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:09.576 13:45:00 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:09.576 13:45:00 -- setup/hugepages.sh@51 -- # shift 00:03:09.576 13:45:00 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:09.576 13:45:00 -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.576 13:45:00 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.576 13:45:00 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:09.576 13:45:00 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:09.576 13:45:00 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:09.576 13:45:00 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.576 13:45:00 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:09.576 13:45:00 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.576 13:45:00 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.576 13:45:00 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.576 13:45:00 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
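The sizing above follows directly from the trace: a request of 2097152 kB at the default 2048 kB hugepage size yields nr_hugepages=1024, pinned to node 0, after clear_hp first zeroes every existing per-node reservation (the per-node assignment continues below). As a sketch, using the sysfs paths seen in the trace (root required; values are this run's, not universal defaults):

# clear_hp: drop existing reservations for every page size on every node
for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do
  echo 0 > "$hp/nr_hugepages"
done
# default_setup: 2097152 kB / 2048 kB per page = 1024 hugepages on node 0
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages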
00:03:09.576 13:45:00 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.576 13:45:00 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:09.576 13:45:00 -- setup/hugepages.sh@73 -- # return 0 00:03:09.576 13:45:00 -- setup/hugepages.sh@137 -- # setup output 00:03:09.576 13:45:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.576 13:45:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.113 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:12.113 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:12.682 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:12.944 13:45:03 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:12.944 13:45:03 -- setup/hugepages.sh@89 -- # local node 00:03:12.944 13:45:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:12.944 13:45:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:12.944 13:45:03 -- setup/hugepages.sh@92 -- # local surp 00:03:12.944 13:45:03 -- setup/hugepages.sh@93 -- # local resv 00:03:12.944 13:45:03 -- setup/hugepages.sh@94 -- # local anon 00:03:12.944 13:45:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:12.944 13:45:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:12.944 13:45:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:12.944 13:45:03 -- setup/common.sh@18 -- # local node= 00:03:12.944 13:45:03 -- setup/common.sh@19 -- # local var val 00:03:12.944 13:45:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:12.944 13:45:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.944 13:45:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.944 13:45:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.944 13:45:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.944 13:45:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.944 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.944 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.944 13:45:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170634896 kB' 'MemAvailable: 173864580 kB' 'Buffers: 3896 kB' 'Cached: 14586732 kB' 'SwapCached: 0 kB' 'Active: 11432452 kB' 'Inactive: 3694072 kB' 'Active(anon): 11014496 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539268 kB' 'Mapped: 183400 kB' 'Shmem: 10478600 kB' 'KReclaimable: 523744 kB' 'Slab: 1166588 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642844 kB' 'KernelStack: 
00:03:12.944 13:45:03 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:12.944 13:45:03 -- setup/hugepages.sh@89 -- # local node
00:03:12.944 13:45:03 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:12.944 13:45:03 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:12.944 13:45:03 -- setup/hugepages.sh@92 -- # local surp
00:03:12.944 13:45:03 -- setup/hugepages.sh@93 -- # local resv
00:03:12.944 13:45:03 -- setup/hugepages.sh@94 -- # local anon
00:03:12.944 13:45:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:12.944 13:45:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:12.944 13:45:03 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:12.944 13:45:03 -- setup/common.sh@18 -- # local node=
00:03:12.944 13:45:03 -- setup/common.sh@19 -- # local var val
00:03:12.944 13:45:03 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.944 13:45:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.944 13:45:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.944 13:45:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.944 13:45:03 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.944 13:45:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.944 13:45:03 -- setup/common.sh@31 -- # IFS=': '
00:03:12.944 13:45:03 -- setup/common.sh@31 -- # read -r var val _
00:03:12.944 13:45:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170634896 kB' 'MemAvailable: 173864580 kB' 'Buffers: 3896 kB' 'Cached: 14586732 kB' 'SwapCached: 0 kB' 'Active: 11432452 kB' 'Inactive: 3694072 kB' 'Active(anon): 11014496 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539268 kB' 'Mapped: 183400 kB' 'Shmem: 10478600 kB' 'KReclaimable: 523744 kB' 'Slab: 1166588 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642844 kB' 'KernelStack: 20752 kB' 'PageTables: 9224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12569676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317212 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[xtrace condensed: setup/common.sh@32 compared every key above against AnonHugePages and ran 'continue' on each non-match; the repeated compare/continue entries are collapsed here]
00:03:12.945 13:45:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.945 13:45:03 -- setup/common.sh@33 -- # echo 0
00:03:12.945 13:45:03 -- setup/common.sh@33 -- # return 0
00:03:12.945 13:45:03 -- setup/hugepages.sh@97 -- # anon=0
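The get_meminfo AnonHugePages call traced above snapshots all of /proc/meminfo with mapfile and then scans it key by key until the requested field matches, echoing its value (0 kB of anonymous transparent hugepages here). A simplified reconstruction of that helper, following the variable names visible in the trace rather than SPDK's exact source:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_meminfo <key> [node] -- print the value of one meminfo field.
    get_meminfo() {
        local get=$1 node=$2 var val _ mem
        local mem_f=/proc/meminfo
        # With a node argument, read that NUMA node's view instead.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node <n> "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    get_meminfo AnonHugePages     # -> 0 in this run
    get_meminfo HugePages_Total   # -> 1024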
00:03:12.945 13:45:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:12.945 13:45:03 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.945 13:45:03 -- setup/common.sh@18 -- # local node=
00:03:12.945 13:45:03 -- setup/common.sh@19 -- # local var val
00:03:12.945 13:45:03 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.945 13:45:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.945 13:45:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.945 13:45:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.945 13:45:03 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.946 13:45:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.946 13:45:03 -- setup/common.sh@31 -- # IFS=': '
00:03:12.946 13:45:03 -- setup/common.sh@31 -- # read -r var val _
00:03:12.946 13:45:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170644928 kB' 'MemAvailable: 173874612 kB' 'Buffers: 3896 kB' 'Cached: 14586736 kB' 'SwapCached: 0 kB' 'Active: 11432448 kB' 'Inactive: 3694072 kB' 'Active(anon): 11014492 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539284 kB' 'Mapped: 183372 kB' 'Shmem: 10478604 kB' 'KReclaimable: 523744 kB' 'Slab: 1166572 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642828 kB' 'KernelStack: 20736 kB' 'PageTables: 9204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12569688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317196 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[xtrace condensed: per-key scan against HugePages_Surp, 'continue' on every non-match]
00:03:12.947 13:45:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.947 13:45:03 -- setup/common.sh@33 -- # echo 0
00:03:12.947 13:45:03 -- setup/common.sh@33 -- # return 0
00:03:12.947 13:45:03 -- setup/hugepages.sh@99 -- # surp=0
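verify_nr_hugepages reads three counters in turn: AnonHugePages earlier, HugePages_Surp (surplus pages allocated beyond nr_hugepages through overcommit) just above, and HugePages_Rsvd (pages a mapping has reserved but not yet faulted in) next; all are 0 in this run. The same counters can also be pulled in a single pass, for example:

    awk '/^HugePages_(Total|Free|Rsvd|Surp):/ {print $1, $2}' /proc/meminfo
    # HugePages_Total: 1024
    # HugePages_Free: 1024
    # HugePages_Rsvd: 0
    # HugePages_Surp: 0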
00:03:12.947 13:45:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:12.947 13:45:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:12.947 13:45:03 -- setup/common.sh@18 -- # local node=
00:03:12.947 13:45:03 -- setup/common.sh@19 -- # local var val
00:03:12.947 13:45:03 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.947 13:45:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.947 13:45:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.947 13:45:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.947 13:45:03 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.947 13:45:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.947 13:45:03 -- setup/common.sh@31 -- # IFS=': '
00:03:12.947 13:45:03 -- setup/common.sh@31 -- # read -r var val _
00:03:12.947 13:45:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170645432 kB' 'MemAvailable: 173875116 kB' 'Buffers: 3896 kB' 'Cached: 14586736 kB' 'SwapCached: 0 kB' 'Active: 11432488 kB' 'Inactive: 3694072 kB' 'Active(anon): 11014532 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539320 kB' 'Mapped: 183372 kB' 'Shmem: 10478604 kB' 'KReclaimable: 523744 kB' 'Slab: 1166572 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642828 kB' 'KernelStack: 20752 kB' 'PageTables: 9252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12569704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317196 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[xtrace condensed: per-key scan against HugePages_Rsvd, 'continue' on every non-match]
00:03:12.949 13:45:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:12.949 13:45:03 -- setup/common.sh@33 -- # echo 0
00:03:12.949 13:45:03 -- setup/common.sh@33 -- # return 0
00:03:12.949 13:45:03 -- setup/hugepages.sh@100 -- # resv=0
00:03:12.949 13:45:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:12.949 nr_hugepages=1024
00:03:12.949 13:45:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:12.949 resv_hugepages=0
00:03:12.949 13:45:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:12.949 surplus_hugepages=0
00:03:12.949 13:45:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:12.949 anon_hugepages=0
00:03:12.949 13:45:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.949 13:45:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
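The two arithmetic guards above are the core of the check: the expected count must equal nr_hugepages plus surplus and reserved pages, and with both at zero it must equal nr_hugepages exactly; the get_meminfo HugePages_Total read that follows then confirms the kernel's own total. Spelled out with this run's values (reusing the hedged get_meminfo sketch from earlier):

    nr_hugepages=1024 surp=0 resv=0
    total=$(get_meminfo HugePages_Total)                  # 1024 here
    (( total == nr_hugepages + surp + resv )) || exit 1   # kernel total matches the request
    (( total == nr_hugepages )) || exit 1                 # and no surplus/reserved drift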
00:03:12.949 13:45:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:12.949 13:45:03 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:12.949 13:45:03 -- setup/common.sh@18 -- # local node=
00:03:12.949 13:45:03 -- setup/common.sh@19 -- # local var val
00:03:12.949 13:45:03 -- setup/common.sh@20 -- # local mem_f mem
00:03:12.949 13:45:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.949 13:45:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.949 13:45:03 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.949 13:45:03 -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.949 13:45:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.949 13:45:03 -- setup/common.sh@31 -- # IFS=': '
00:03:12.949 13:45:03 -- setup/common.sh@31 -- # read -r var val _
00:03:12.949 13:45:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170646164 kB' 'MemAvailable: 173875848 kB' 'Buffers: 3896 kB' 'Cached: 14586772 kB' 'SwapCached: 0 kB' 'Active: 11432132 kB' 'Inactive: 3694072 kB' 'Active(anon): 11014176 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538892 kB' 'Mapped: 183372 kB' 'Shmem: 10478640 kB' 'KReclaimable: 523744 kB' 'Slab: 1166572 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642828 kB' 'KernelStack: 20720 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12569720 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317196 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[xtrace condensed: per-key scan against HugePages_Total, 'continue' on every non-match]
00:03:12.950 13:45:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.950 13:45:03 -- setup/common.sh@33 -- # echo 1024
00:03:12.950 13:45:03 -- setup/common.sh@33 -- # return 0
00:03:12.950 13:45:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.950 13:45:03 -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.951 13:45:03 -- setup/hugepages.sh@27 -- # local node
00:03:12.951 13:45:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.951 13:45:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:12.951 13:45:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.951 13:45:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:12.951 13:45:03 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:12.951 13:45:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
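get_nodes, per the trace above, globs /sys/devices/system/node/node<N> and records each node's expected hugepage count; on this two-socket box node0 holds all 1024 pages and node1 none, and the per-node pass that follows re-reads the counters through each node's own meminfo file. A sketch of that walk, again reusing the earlier get_meminfo sketch rather than SPDK's exact source:

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # e.g. node0's file reads "Node 0 HugePages_Total:  1024"
        nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
    done
    no_nodes=${#nodes_sys[@]}      # 2 on this machine
    (( no_nodes > 0 )) || exit 1
    get_meminfo HugePages_Surp 0   # -> 0 on node0 (see the node0 snapshot below)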
kB' 'MemUsed: 6228796 kB' 'SwapCached: 0 kB' 'Active: 2505520 kB' 'Inactive: 216924 kB' 'Active(anon): 2343696 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2558676 kB' 'Mapped: 66448 kB' 'AnonPages: 166960 kB' 'Shmem: 2179928 kB' 'KernelStack: 11416 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350680 kB' 'Slab: 652372 kB' 'SReclaimable: 350680 kB' 'SUnreclaim: 301692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 
-- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.951 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.951 13:45:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # continue 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:12.952 13:45:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:12.952 13:45:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:12.952 13:45:03 -- setup/common.sh@33 -- # echo 0 00:03:12.952 13:45:03 -- setup/common.sh@33 -- # return 0 00:03:12.952 13:45:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:12.952 13:45:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:12.952 13:45:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:12.952 13:45:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:12.952 13:45:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:12.952 node0=1024 expecting 1024 00:03:12.952 13:45:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:12.952 00:03:12.952 real 0m3.562s 00:03:12.952 user 0m0.985s 00:03:12.952 sys 0m1.569s 00:03:12.952 13:45:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.952 13:45:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.952 ************************************ 00:03:12.952 END TEST default_setup 00:03:12.952 ************************************ 00:03:12.952 13:45:03 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:12.952 13:45:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.952 13:45:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.952 13:45:03 -- common/autotest_common.sh@10 -- # set +x 00:03:12.952 ************************************ 00:03:12.952 START TEST per_node_1G_alloc 00:03:12.952 ************************************ 00:03:12.952 13:45:03 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:12.952 13:45:03 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:12.952 13:45:03 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:12.952 13:45:03 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:12.952 13:45:03 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:12.952 13:45:03 -- setup/hugepages.sh@51 -- # shift 00:03:12.952 13:45:03 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:12.952 13:45:03 -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.952 13:45:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.952 13:45:03 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:12.952 13:45:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:12.952 13:45:03 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:12.952 13:45:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.952 13:45:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:12.952 13:45:03 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.952 13:45:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.952 13:45:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.952 13:45:03 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:12.952 13:45:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.952 13:45:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:12.952 13:45:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.952 13:45:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:12.952 13:45:03 -- setup/hugepages.sh@73 -- # return 0 00:03:12.952 13:45:03 -- setup/hugepages.sh@146 -- # 
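
The get_meminfo scan traced above boils down to a short bash routine: pick /sys/devices/system/node/node<N>/meminfo when a node id is supplied (falling back to /proc/meminfo), strip the "Node N " prefix that the per-node file carries, then read key/value pairs until the requested field matches. A minimal standalone sketch of that pattern follows; the function name and argument handling are illustrative, not SPDK's exact setup/common.sh code:

    #!/usr/bin/env bash
    shopt -s extglob                        # needed for the +([0-9]) pattern below
    # meminfo_get FIELD [NODE] - print FIELD from /proc/meminfo or a NUMA node's meminfo
    meminfo_get() {
        local get=$1 node=$2 var val _ entry
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines are prefixed "Node N "
        for entry in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$entry"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    meminfo_get HugePages_Total             # e.g. 1024
    meminfo_get HugePages_Surp 0            # surplus pages on node0, e.g. 0
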
00:03:12.952 13:45:03 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:12.952 13:45:03 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:12.952 13:45:03 -- setup/hugepages.sh@146 -- # setup output
00:03:12.952 13:45:03 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:12.952 13:45:03 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:15.545 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:15.545 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:15.545 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:15.545 13:45:06 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:15.546 13:45:06 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:15.546 13:45:06 -- setup/hugepages.sh@89 -- # local node
00:03:15.546 13:45:06 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:15.546 13:45:06 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:15.546 13:45:06 -- setup/hugepages.sh@92 -- # local surp
00:03:15.546 13:45:06 -- setup/hugepages.sh@93 -- # local resv
00:03:15.546 13:45:06 -- setup/hugepages.sh@94 -- # local anon
00:03:15.546 13:45:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:15.546 13:45:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:15.546 13:45:06 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:15.546 13:45:06 -- setup/common.sh@18 -- # local node=
00:03:15.546 13:45:06 -- setup/common.sh@19 -- # local var val
00:03:15.546 13:45:06 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.546 13:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.546 13:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.546 13:45:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.546 13:45:06 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.546 13:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.546 13:45:06 -- setup/common.sh@31 -- # IFS=': '
00:03:15.546 13:45:06 -- setup/common.sh@31 -- # read -r var val _
00:03:15.546 13:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170682020 kB' 'MemAvailable: 173911704 kB' 'Buffers: 3896 kB' 'Cached: 14586844 kB' 'SwapCached: 0 kB' 'Active: 11433432 kB' 'Inactive: 3694072 kB' 'Active(anon): 11015476 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539968 kB' 'Mapped: 183372 kB' 'Shmem: 10478712 kB' 'KReclaimable: 523744 kB' 'Slab: 1167100 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643356 kB' 'KernelStack: 20656 kB' 'PageTables: 9104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12570188 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317340 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
00:03:15.546 13:45:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.546 13:45:06 -- setup/common.sh@32 -- # continue
...
00:03:15.810 13:45:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:15.810 13:45:06 -- setup/common.sh@33 -- # echo 0
00:03:15.810 13:45:06 -- setup/common.sh@33 -- # return 0
00:03:15.810 13:45:06 -- setup/hugepages.sh@97 -- # anon=0
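
The two checks that open verify_nr_hugepages gate the anonymous-hugepage accounting: AnonHugePages is only consulted when transparent hugepages are not disabled, and the kernel marks the active THP mode by bracketing it in /sys/kernel/mm/transparent_hugepage/enabled, which is what the *\[\n\e\v\e\r\]* pattern above tests. A rough equivalent of that gate, with illustrative variable names:

    thp=/sys/kernel/mm/transparent_hugepage/enabled
    # the file reads e.g. "always [madvise] never"; the bracketed word is the active mode
    if [[ -r $thp && $(<"$thp") != *"[never]"* ]]; then
        # THP can hand out anonymous huge pages on its own; account for them
        anon_kb=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
    else
        anon_kb=0
    fi
    echo "anon_hugepages=$anon_kb"
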
00:03:15.810 13:45:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:15.810 13:45:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.810 13:45:06 -- setup/common.sh@18 -- # local node=
00:03:15.810 13:45:06 -- setup/common.sh@19 -- # local var val
00:03:15.810 13:45:06 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.811 13:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.811 13:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.811 13:45:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.811 13:45:06 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.811 13:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.811 13:45:06 -- setup/common.sh@31 -- # IFS=': '
00:03:15.811 13:45:06 -- setup/common.sh@31 -- # read -r var val _
00:03:15.811 13:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170685192 kB' 'MemAvailable: 173914876 kB' 'Buffers: 3896 kB' 'Cached: 14586848 kB' 'SwapCached: 0 kB' 'Active: 11433200 kB' 'Inactive: 3694072 kB' 'Active(anon): 11015244 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539832 kB' 'Mapped: 183332 kB' 'Shmem: 10478716 kB' 'KReclaimable: 523744 kB' 'Slab: 1167180 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643436 kB' 'KernelStack: 20752 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12570200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317308 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
00:03:15.811 13:45:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.811 13:45:06 -- setup/common.sh@32 -- # continue
...
00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.812 13:45:06 -- setup/common.sh@33 -- # echo 0
00:03:15.812 13:45:06 -- setup/common.sh@33 -- # return 0
00:03:15.812 13:45:06 -- setup/hugepages.sh@99 -- # surp=0
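
With anon and surp known, the remaining lookups feed one invariant: the kernel's HugePages_Total must equal the page count the test configured plus any surplus and reserved pages (the @107 check a few lines below). Expressed with the illustrative meminfo_get helper sketched earlier:

    nr_hugepages=1024                        # what the test requested
    total=$(meminfo_get HugePages_Total)
    surp=$(meminfo_get HugePages_Surp)
    resv=$(meminfo_get HugePages_Rsvd)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent (total=$total surp=$surp resv=$resv)"
    else
        echo "hugepage accounting mismatch: total=$total" >&2
    fi
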
13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.812 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.812 13:45:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ SwapFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r var val _ 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.813 13:45:06 -- setup/common.sh@32 -- # continue 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # IFS=': ' 00:03:15.813 13:45:06 -- setup/common.sh@31 -- # read -r 
[... repetitive xtrace elided: setup/common.sh@32 tests each remaining /proc/meminfo key (PageTables through HugePages_Free) against HugePages_Rsvd and hits 'continue' on every non-match ...]
00:03:15.813 13:45:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:15.813 13:45:06 -- setup/common.sh@33 -- # echo 0
00:03:15.814 13:45:06 -- setup/common.sh@33 -- # return 0
00:03:15.814 13:45:06 -- setup/hugepages.sh@100 -- # resv=0
00:03:15.814 13:45:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:15.814 nr_hugepages=1024
00:03:15.814 13:45:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:15.814 resv_hugepages=0
00:03:15.814 13:45:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:15.814 surplus_hugepages=0
00:03:15.814 13:45:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:15.814 anon_hugepages=0
00:03:15.814 13:45:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.814 13:45:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
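The loop traced above is setup/common.sh's get_meminfo: it splits each meminfo line on ': ' and skips keys until the requested one matches, then echoes the value. A minimal standalone sketch of that parsing idea (the function name and the direct read from /proc/meminfo are simplifications of mine, not the exact SPDK helper, which reads from a mapfile'd array):

    #!/usr/bin/env bash
    # Sketch: fetch one field from /proc/meminfo the way the traced loop does.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the xtrace shows one such test per key
            echo "$val"                        # e.g. HugePages_Rsvd -> 0
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }

    get_meminfo_sketch HugePages_Rsvd   # printed 0 in the run above

With HugePages_Rsvd at 0, the two arithmetic checks above reduce to 1024 == 1024 + 0 + 0 and 1024 == 1024, so the pool matches exactly what the test requested.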
00:03:15.814 13:45:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:15.814 13:45:06 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:15.814 13:45:06 -- setup/common.sh@18 -- # local node=
00:03:15.814 13:45:06 -- setup/common.sh@19 -- # local var val
00:03:15.814 13:45:06 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.814 13:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.814 13:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:15.814 13:45:06 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:15.814 13:45:06 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.814 13:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.814 13:45:06 -- setup/common.sh@31 -- # IFS=': '
00:03:15.814 13:45:06 -- setup/common.sh@31 -- # read -r var val _
00:03:15.814 13:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170685192 kB' 'MemAvailable: 173914876 kB' 'Buffers: 3896 kB' 'Cached: 14586876 kB' 'SwapCached: 0 kB' 'Active: 11433260 kB' 'Inactive: 3694072 kB' 'Active(anon): 11015304 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539844 kB' 'Mapped: 183332 kB' 'Shmem: 10478744 kB' 'KReclaimable: 523744 kB' 'Slab: 1167180 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643436 kB' 'KernelStack: 20752 kB' 'PageTables: 9236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12570228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317308 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[... repetitive xtrace elided: the same per-key loop walks MemTotal through Unaccepted, continuing past every key that is not HugePages_Total ...]
00:03:15.815 13:45:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.815 13:45:06 -- setup/common.sh@33 -- # echo 1024
00:03:15.815 13:45:06 -- setup/common.sh@33 -- # return 0
00:03:15.815 13:45:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.815 13:45:06 -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.815 13:45:06 -- setup/hugepages.sh@27 -- # local node
00:03:15.815 13:45:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.815 13:45:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.815 13:45:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.815 13:45:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.815 13:45:06 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.815 13:45:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:15.815 13:45:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.815 13:45:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.815 13:45:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.815 13:45:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.815 13:45:06 -- setup/common.sh@18 -- # local node=0
00:03:15.815 13:45:06 -- setup/common.sh@19 -- # local var val
00:03:15.815 13:45:06 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.815 13:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.815 13:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.815 13:45:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.815 13:45:06 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.815 13:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.815 13:45:06 -- setup/common.sh@31 -- # IFS=': '
00:03:15.815 13:45:06 -- setup/common.sh@31 -- # read -r var val _
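With a node argument, the same helper switches its source from /proc/meminfo to the per-NUMA-node file, whose lines carry a 'Node 0 ' prefix that the mem=("${mem[@]#Node +([0-9]) }") expansion strips. A hedged sketch of that source selection (paths as in the trace; extglob is what makes the +([0-9]) pattern legal):

    # Sketch: pick the meminfo source for one NUMA node, as get_meminfo does.
    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node lines read 'Node 0 HugePages_Surp: 0'; dropping the prefix lets
    # the same key-matching loop handle both the global and per-node files.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"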
00:03:15.815 13:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92461904 kB' 'MemUsed: 5153724 kB' 'SwapCached: 0 kB' 'Active: 2505980 kB' 'Inactive: 216924 kB' 'Active(anon): 2344156 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2558688 kB' 'Mapped: 66408 kB' 'AnonPages: 167388 kB' 'Shmem: 2179940 kB' 'KernelStack: 11448 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350680 kB' 'Slab: 653144 kB' 'SReclaimable: 350680 kB' 'SUnreclaim: 302464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive xtrace elided: per-key loop over the node0 meminfo fields, continuing until HugePages_Surp matches ...]
00:03:15.816 13:45:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.816 13:45:06 -- setup/common.sh@33 -- # echo 0
00:03:15.816 13:45:06 -- setup/common.sh@33 -- # return 0
00:03:15.816 13:45:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.816 13:45:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.816 13:45:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.816 13:45:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:15.816 13:45:06 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.816 13:45:06 -- setup/common.sh@18 -- # local node=1
00:03:15.816 13:45:06 -- setup/common.sh@19 -- # local var val
00:03:15.816 13:45:06 -- setup/common.sh@20 -- # local mem_f mem
00:03:15.816 13:45:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.816 13:45:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:15.816 13:45:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:15.816 13:45:06 -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.816 13:45:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.816 13:45:06 -- setup/common.sh@31 -- # IFS=': '
00:03:15.816 13:45:06 -- setup/common.sh@31 -- # read -r var val _
00:03:15.817 13:45:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78223572 kB' 'MemUsed: 15541936 kB' 'SwapCached: 0 kB' 'Active: 8927248 kB' 'Inactive: 3477148 kB' 'Active(anon): 8671116 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477148 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12032108 kB' 'Mapped: 116924 kB' 'AnonPages: 372416 kB' 'Shmem: 8298828 kB' 'KernelStack: 9304 kB' 'PageTables: 4868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173064 kB' 'Slab: 514036 kB' 'SReclaimable: 173064 kB' 'SUnreclaim: 340972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... repetitive xtrace elided: per-key loop over the node1 meminfo fields, continuing until HugePages_Surp matches ...]
00:03:15.818 13:45:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.818 13:45:06 -- setup/common.sh@33 -- # echo 0
00:03:15.818 13:45:06 -- setup/common.sh@33 -- # return 0
00:03:15.818 13:45:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.818 13:45:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.818 13:45:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.818 13:45:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.818 13:45:06 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:15.818 node0=512 expecting 512
00:03:15.818 13:45:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:15.818 13:45:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:15.818 13:45:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:15.818 13:45:06 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:15.818 node1=512 expecting 512
00:03:15.818 13:45:06 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:15.818
00:03:15.818 real 0m2.794s
00:03:15.818 user 0m1.095s
00:03:15.818 sys 0m1.698s
00:03:15.818 13:45:06 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:15.818 13:45:06 -- common/autotest_common.sh@10 -- # set +x
00:03:15.818 ************************************
00:03:15.818 END TEST per_node_1G_alloc
00:03:15.818 ************************************
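The PASS just logged is plain accounting: each node reported HugePages_Total=512 and HugePages_Surp=0, resv is 0, so both entries of nodes_test stay at the expected 512 and the [[ 512 == 512 ]] comparison succeeds. A compressed sketch of that verification (variable names mirror hugepages.sh, but the loop is simplified and the values are the ones read in this run):

    # Sketch: the arithmetic behind 'node0=512 expecting 512' above.
    resv=0 surp=0 nr_hugepages=1024
    nodes_test=(512 512)            # HugePages_Total read from node0 and node1
    (( 1024 == nr_hugepages + surp + resv )) && echo "global pool consistent"
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # per-node HugePages_Surp was 0 here
        echo "node$node=${nodes_test[node]} expecting 512"
    done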
setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:15.818 13:45:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:15.818 13:45:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.818 13:45:06 -- common/autotest_common.sh@10 -- # set +x 00:03:15.818 ************************************ 00:03:15.818 START TEST even_2G_alloc 00:03:15.818 ************************************ 00:03:15.818 13:45:06 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:15.818 13:45:06 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:15.818 13:45:06 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.818 13:45:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.818 13:45:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.818 13:45:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.818 13:45:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.818 13:45:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.818 13:45:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.818 13:45:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.818 13:45:06 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.818 13:45:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.818 13:45:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.818 13:45:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.818 13:45:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:15.818 13:45:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.818 13:45:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:15.818 13:45:06 -- setup/hugepages.sh@83 -- # : 512 00:03:15.818 13:45:06 -- setup/hugepages.sh@84 -- # : 1 00:03:15.818 13:45:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.818 13:45:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:15.818 13:45:06 -- setup/hugepages.sh@83 -- # : 0 00:03:15.818 13:45:06 -- setup/hugepages.sh@84 -- # : 0 00:03:15.818 13:45:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.818 13:45:06 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:15.818 13:45:06 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:15.818 13:45:06 -- setup/hugepages.sh@153 -- # setup output 00:03:15.818 13:45:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.818 13:45:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.114 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.114 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:80:04.2 (8086 
2021): Already using the vfio-pci driver 00:03:19.114 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:19.114 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:19.114 13:45:09 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:19.114 13:45:09 -- setup/hugepages.sh@89 -- # local node 00:03:19.114 13:45:09 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:19.114 13:45:09 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:19.114 13:45:09 -- setup/hugepages.sh@92 -- # local surp 00:03:19.114 13:45:09 -- setup/hugepages.sh@93 -- # local resv 00:03:19.114 13:45:09 -- setup/hugepages.sh@94 -- # local anon 00:03:19.114 13:45:09 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:19.114 13:45:09 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:19.114 13:45:09 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:19.114 13:45:09 -- setup/common.sh@18 -- # local node= 00:03:19.114 13:45:09 -- setup/common.sh@19 -- # local var val 00:03:19.114 13:45:09 -- setup/common.sh@20 -- # local mem_f mem 00:03:19.114 13:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:19.114 13:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:19.114 13:45:09 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:19.114 13:45:09 -- setup/common.sh@28 -- # mapfile -t mem 00:03:19.114 13:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:19.114 13:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.114 13:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.114 13:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170678156 kB' 'MemAvailable: 173907840 kB' 'Buffers: 3896 kB' 'Cached: 14586956 kB' 'SwapCached: 0 kB' 'Active: 11427876 kB' 'Inactive: 3694072 kB' 'Active(anon): 11009920 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534364 kB' 'Mapped: 182056 kB' 'Shmem: 10478824 kB' 'KReclaimable: 523744 kB' 'Slab: 1166876 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643132 kB' 'KernelStack: 20816 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12554360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317320 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:19.114 13:45:09 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.114 13:45:09 -- setup/common.sh@32 -- # continue 00:03:19.114 13:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.114 13:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.114 13:45:09 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:19.114 13:45:09 -- setup/common.sh@32 -- # continue 00:03:19.114 13:45:09 -- setup/common.sh@31 -- # IFS=': ' 00:03:19.114 13:45:09 -- setup/common.sh@31 -- # read -r var val _ 00:03:19.114 13:45:09 -- setup/common.sh@32 -- # [[ MemAvailable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.114 13:45:09 -- setup/common.sh@32 -- # continue
00:03:19.114 13:45:09 -- setup/common.sh@31 -- # IFS=': '
00:03:19.114 13:45:09 -- setup/common.sh@31 -- # read -r var val _
00:03:19.114 13:45:09 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.114 13:45:09 -- setup/common.sh@32 -- # continue
[... identical compare/continue/read xtrace elided: the loop steps through each remaining /proc/meminfo field (Cached, SwapCached, Active, Inactive, ..., Percpu, HardwareCorrupted) until the requested key comes up ...]
00:03:19.115 13:45:09 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:19.115 13:45:09 -- setup/common.sh@33 -- # echo 0
00:03:19.115 13:45:09 -- setup/common.sh@33 -- # return 0
00:03:19.115 13:45:09 -- setup/hugepages.sh@97 -- # anon=0
00:03:19.115 13:45:09 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:19.115 13:45:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.115 13:45:09 -- setup/common.sh@18 -- # local node=
00:03:19.115 13:45:09 -- setup/common.sh@19 -- # local var val
00:03:19.115 13:45:09 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.115 13:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.115 13:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.115 13:45:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.115 13:45:09 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.115 13:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.115 13:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170678816 kB' 'MemAvailable: 173908500 kB' 'Buffers: 3896 kB' 'Cached: 14586960 kB' 'SwapCached: 0 kB' 'Active: 11431376 kB' 'Inactive: 3694072 kB' 'Active(anon): 11013420 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537672 kB' 'Mapped: 182340 kB' 'Shmem: 10478828 kB' 'KReclaimable: 523744 kB' 'Slab: 1166916 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643172 kB' 'KernelStack: 20864 kB' 'PageTables: 9364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12568176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317404 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
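The trace above is setup/common.sh's get_meminfo walking the dump it just printed, one 'key: value' pair at a time, until the requested key matches. A minimal sketch of that function as reconstructed from the xtrace (not the verbatim SPDK source; the escaped pattern on the right of == in the trace is just bash quoting the literal key):

    # Sketch of get_meminfo, reconstructed from the xtrace above: print the
    # value of one meminfo key, system-wide or for a single NUMA node.
    get_meminfo() {
        local get=$1 node=${2:-} var val mem_f mem line
        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            # Per-node lookup, e.g. get_meminfo HugePages_Surp 0 later in this log.
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        for line in "${mem[@]}"; do
            # Split "MemTotal:       191381136 kB" into key and value.
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # e.g. 0 for AnonHugePages here, 1024 for HugePages_Total below
                return 0
            fi
        done
        return 1
    }

In this run, get_meminfo AnonHugePages returns 0, so hugepages.sh records anon=0 and moves on to the surplus and reserved counters.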
00:03:19.115 13:45:09 -- setup/common.sh@31 -- # IFS=': '
00:03:19.115 13:45:09 -- setup/common.sh@31 -- # read -r var val _
[... compare/continue/read xtrace elided: the loop steps through each field of the dump above (MemTotal, MemFree, ..., HugePages_Rsvd) until the requested key comes up ...]
00:03:19.117 13:45:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.117 13:45:09 -- setup/common.sh@33 -- # echo 0
00:03:19.117 13:45:09 -- setup/common.sh@33 -- # return 0
00:03:19.117 13:45:09 -- setup/hugepages.sh@99 -- # surp=0
00:03:19.117 13:45:09 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:19.117 13:45:09 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:19.117 13:45:09 -- setup/common.sh@18 -- # local node=
00:03:19.117 13:45:09 -- setup/common.sh@19 -- # local var val
00:03:19.117 13:45:09 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.117 13:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.117 13:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.117 13:45:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.117 13:45:09 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.117 13:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
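Before each scan, the mem=("${mem[@]#Node +([0-9]) }") step seen in the trace strips the "Node <n> " prefix that per-node meminfo files carry, so the same parser handles both /proc/meminfo and the sysfs copies. The +([0-9]) pattern needs extglob; a standalone sketch of just that strip (sample lines are illustrative):

    #!/usr/bin/env bash
    # Sketch of the prefix strip at setup/common.sh@29. Lines from
    # /sys/devices/system/node/nodeN/meminfo look like "Node 0 MemTotal: ...",
    # while /proc/meminfo has plain "MemTotal: ...". With extglob enabled,
    # ${var#Node +([0-9]) } removes the per-node prefix when present.
    shopt -s extglob
    mem=("Node 0 MemTotal:       97615628 kB" "MemFree:        92469732 kB")
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"
    # -> MemTotal:       97615628 kB
    # -> MemFree:        92469732 kB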
00:03:19.117 13:45:09 -- setup/common.sh@31 -- # IFS=': '
00:03:19.117 13:45:09 -- setup/common.sh@31 -- # read -r var val _
00:03:19.117 13:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170698704 kB' 'MemAvailable: 173928388 kB' 'Buffers: 3896 kB' 'Cached: 14586960 kB' 'SwapCached: 0 kB' 'Active: 11426340 kB' 'Inactive: 3694072 kB' 'Active(anon): 11008384 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532828 kB' 'Mapped: 181836 kB' 'Shmem: 10478828 kB' 'KReclaimable: 523744 kB' 'Slab: 1166916 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643172 kB' 'KernelStack: 21024 kB' 'PageTables: 9740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12554368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317352 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[... compare/continue/read xtrace elided: the loop steps through each field of the dump above until the requested key comes up ...]
00:03:19.118 13:45:09 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:19.118 13:45:09 -- setup/common.sh@33 -- # echo 0
00:03:19.118 13:45:09 -- setup/common.sh@33 -- # return 0
00:03:19.118 13:45:09 -- setup/hugepages.sh@100 -- # resv=0
00:03:19.118 13:45:09 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:19.118 nr_hugepages=1024
00:03:19.118 13:45:09 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:19.118 resv_hugepages=0
00:03:19.118 13:45:09 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:19.118 surplus_hugepages=0
00:03:19.118 13:45:09 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:19.118 anon_hugepages=0
00:03:19.118 13:45:09 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:19.118 13:45:09 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
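With anon, surp and resv all read back as 0, hugepages.sh now verifies that the kernel's HugePages_Total matches the requested count plus surplus and reserved pages. A minimal sketch of that bookkeeping at hugepages.sh@107-110, plugging in the values reported in this run (1024 pages of 2048 kB):

    # Sketch of the verification arithmetic, with this run's values.
    nr_hugepages=1024   # requested page count
    surp=0              # HugePages_Surp from get_meminfo
    resv=0              # HugePages_Rsvd from get_meminfo
    total=1024          # HugePages_Total from get_meminfo

    # 1024 == 1024 + 0 + 0, so the allocation is considered consistent.
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: ${total} x 2048 kB = $(( total * 2048 )) kB"
    fi

The 2097152 kB product matches the Hugetlb line in the dumps above.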
00:03:19.118 13:45:09 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:19.118 13:45:09 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:19.118 13:45:09 -- setup/common.sh@18 -- # local node=
00:03:19.118 13:45:09 -- setup/common.sh@19 -- # local var val
00:03:19.118 13:45:09 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.118 13:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.118 13:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:19.118 13:45:09 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:19.118 13:45:09 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.118 13:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.118 13:45:09 -- setup/common.sh@31 -- # IFS=': '
00:03:19.118 13:45:09 -- setup/common.sh@31 -- # read -r var val _
00:03:19.118 13:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170697400 kB' 'MemAvailable: 173927084 kB' 'Buffers: 3896 kB' 'Cached: 14586992 kB' 'SwapCached: 0 kB' 'Active: 11428500 kB' 'Inactive: 3694072 kB' 'Active(anon): 11010544 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534920 kB' 'Mapped: 182000 kB' 'Shmem: 10478860 kB' 'KReclaimable: 523744 kB' 'Slab: 1166924 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643180 kB' 'KernelStack: 20832 kB' 'PageTables: 9348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12556440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317304 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[... compare/continue/read xtrace elided: the loop steps through each field of the dump above (MemTotal, ..., Unaccepted) until the requested key comes up ...]
00:03:19.120 13:45:09 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:19.120 13:45:09 -- setup/common.sh@33 -- # echo 1024
00:03:19.120 13:45:09 -- setup/common.sh@33 -- # return 0
00:03:19.120 13:45:09 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:19.120 13:45:09 -- setup/hugepages.sh@112 -- # get_nodes
00:03:19.120 13:45:09 -- setup/hugepages.sh@27 -- # local node
00:03:19.120 13:45:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.120 13:45:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.120 13:45:09 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:19.120 13:45:09 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:19.120 13:45:09 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:19.120 13:45:09 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:19.120 13:45:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.120 13:45:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.120 13:45:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.120 13:45:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.120 13:45:09 -- setup/common.sh@18 -- # local node=0
00:03:19.120 13:45:09 -- setup/common.sh@19 -- # local var val
00:03:19.120 13:45:09 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.120 13:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.120 13:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.120 13:45:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.120 13:45:09 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.120 13:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.120 13:45:09 -- setup/common.sh@31 -- # IFS=': '
00:03:19.120 13:45:09 -- setup/common.sh@31 -- # read -r var val _
00:03:19.120 13:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92469732 kB' 'MemUsed: 5145896 kB' 'SwapCached: 0 kB' 'Active: 2509240 kB' 'Inactive: 216924 kB' 'Active(anon): 2347416 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2558724 kB' 'Mapped: 66096 kB' 'AnonPages: 170628 kB' 'Shmem: 2179976 kB' 'KernelStack: 11416 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350680 kB' 'Slab: 653052 kB' 'SReclaimable: 350680 kB' 'SUnreclaim: 302372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
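Here the same get_meminfo runs with a node argument: because /sys/devices/system/node/node0/meminfo exists, mem_f is switched away from /proc/meminfo before the scan. A sketch of just that path selection at setup/common.sh@22-24, as reconstructed from the trace:

    # Fall back to the system-wide /proc/meminfo unless a per-node file exists.
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    echo "reading $mem_f"   # -> reading /sys/devices/system/node/node0/meminfo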
00:03:19.120 13:45:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.120 13:45:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.120 13:45:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:19.120 13:45:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.120 13:45:09 -- setup/common.sh@18 -- # local node=0
00:03:19.120 13:45:09 -- setup/common.sh@19 -- # local var val
00:03:19.120 13:45:09 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.120 13:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.120 13:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:19.120 13:45:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:19.120 13:45:09 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.120 13:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.120 13:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92469732 kB' 'MemUsed: 5145896 kB' 'SwapCached: 0 kB' 'Active: 2509240 kB' 'Inactive: 216924 kB' 'Active(anon): 2347416 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2558724 kB' 'Mapped: 66096 kB' 'AnonPages: 170628 kB' 'Shmem: 2179976 kB' 'KernelStack: 11416 kB' 'PageTables: 4036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350680 kB' 'Slab: 653052 kB' 'SReclaimable: 350680 kB' 'SUnreclaim: 302372 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 read each node0 field above in turn, compare it to HugePages_Surp, and continue past every non-match from MemTotal through HugePages_Free]
00:03:19.121 13:45:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.121 13:45:09 -- setup/common.sh@33 -- # echo 0
00:03:19.121 13:45:09 -- setup/common.sh@33 -- # return 0
00:03:19.121 13:45:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
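Node 0's surplus count came back as 0. The lookup pattern visible in the xtrace is worth spelling out: when a node number is supplied, get_meminfo switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo, strips the "Node N " prefix that every line of the per-node file carries, and then walks the fields until the requested key matches. A sketch reconstructing that pattern from the trace (not the suite's verbatim source):

  # Per-node meminfo lookup as suggested by the xtrace above.
  get=HugePages_Surp node=0
  mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
      mem_f=/sys/devices/system/node/node$node/meminfo

  shopt -s extglob                      # +([0-9]) below is an extglob pattern
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")      # drop the "Node 0 " prefix

  for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; break; }
  done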
00:03:19.121 13:45:09 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:19.121 13:45:09 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:19.121 13:45:09 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:19.121 13:45:09 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:19.121 13:45:09 -- setup/common.sh@18 -- # local node=1
00:03:19.121 13:45:09 -- setup/common.sh@19 -- # local var val
00:03:19.121 13:45:09 -- setup/common.sh@20 -- # local mem_f mem
00:03:19.121 13:45:09 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:19.121 13:45:09 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:19.121 13:45:09 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:19.121 13:45:09 -- setup/common.sh@28 -- # mapfile -t mem
00:03:19.121 13:45:09 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:19.121 13:45:09 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78221304 kB' 'MemUsed: 15544204 kB' 'SwapCached: 0 kB' 'Active: 8922016 kB' 'Inactive: 3477148 kB' 'Active(anon): 8665884 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477148 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12032196 kB' 'Mapped: 115628 kB' 'AnonPages: 366832 kB' 'Shmem: 8298916 kB' 'KernelStack: 9368 kB' 'PageTables: 4900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173064 kB' 'Slab: 513872 kB' 'SReclaimable: 173064 kB' 'SUnreclaim: 340808 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@31-32 scan node1's fields above against HugePages_Surp exactly as for node0, continuing past every non-match]
00:03:19.122 13:45:09 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:19.122 13:45:09 -- setup/common.sh@33 -- # echo 0
00:03:19.122 13:45:09 -- setup/common.sh@33 -- # return 0
00:03:19.122 13:45:09 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:19.122 13:45:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.122 13:45:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.122 13:45:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.122 13:45:09 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:19.122 node0=512 expecting 512
00:03:19.122 13:45:09 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:19.122 13:45:09 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:19.122 13:45:09 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:19.122 13:45:09 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:19.122 node1=512 expecting 512
00:03:19.122 13:45:09 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:19.122 real 0m2.929s
00:03:19.122 user 0m1.198s
00:03:19.122 sys 0m1.789s
00:03:19.122 13:45:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.122 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:03:19.122 ************************************
00:03:19.122 END TEST even_2G_alloc
00:03:19.122 ************************************
00:03:19.122 13:45:09 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:19.122 13:45:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:19.122 13:45:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:19.122 13:45:09 -- common/autotest_common.sh@10 -- # set +x
00:03:19.122 ************************************
00:03:19.122 START TEST odd_alloc
00:03:19.122 ************************************
00:03:19.122 13:45:09 -- common/autotest_common.sh@1104 -- # odd_alloc
00:03:19.122 13:45:09 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:19.122 13:45:09 -- setup/hugepages.sh@49 -- # local size=2098176
00:03:19.122 13:45:09 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:19.122 13:45:09 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:19.122 13:45:09 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:19.122 13:45:09 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:19.122 13:45:09 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:19.122 13:45:09 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:19.122 13:45:09 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:19.122 13:45:09 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:19.122 13:45:09 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:19.122 13:45:09 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:19.122 13:45:09 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
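even_2G_alloc passes: both nodes report 512 pages against an expectation of 512, and the sorted_t/sorted_s arrays used above act as sets, so identical per-node counts collapse to a single key and one string comparison closes the test. A small illustration of that set idiom, assuming two nodes with the values from this run:

  # Set idiom from the trace: identical per-node counts collapse to one key.
  declare -A sorted_t=() sorted_s=()
  nodes_test=([0]=512 [1]=512)          # expected per-node counts
  nodes_sys=([0]=512 [1]=512)           # counts read back from sysfs

  for node in "${!nodes_test[@]}"; do
      sorted_t[${nodes_test[node]}]=1
      sorted_s[${nodes_sys[node]}]=1
      echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
  done
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK

The odd_alloc test that starts next deliberately requests 2098176 kB, i.e. 1025 pages of 2048 kB, a count that cannot be split evenly across two nodes.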
00:03:19.122 13:45:09 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:19.122 13:45:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.122 13:45:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:19.122 13:45:09 -- setup/hugepages.sh@83 -- # : 513
00:03:19.122 13:45:09 -- setup/hugepages.sh@84 -- # : 1
00:03:19.122 13:45:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.122 13:45:09 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:19.122 13:45:09 -- setup/hugepages.sh@83 -- # : 0
00:03:19.122 13:45:09 -- setup/hugepages.sh@84 -- # : 0
00:03:19.122 13:45:09 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:19.122 13:45:09 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:19.122 13:45:09 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:19.122 13:45:09 -- setup/hugepages.sh@160 -- # setup output
00:03:19.122 13:45:09 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:19.122 13:45:09 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:21.662 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:21.662 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:21.662 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
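The per-node split for the odd request is visible above: walking nodes from last to first, node1 gets floor(1025/2) = 512 and node0 the remaining 513, after which scripts/setup.sh re-provisions hugepages (the NVMe and IOAT devices are already bound to vfio-pci, so only memory setup remains). A sketch of that split, inferred from the nodes_test[_no_nodes - 1] assignments in the trace rather than taken from the script:

  # Inferred odd-count split: last node gets the floor, first the remainder.
  _nr_hugepages=1025
  _no_nodes=2
  nodes_test=()

  while (( _no_nodes > 0 )); do
      nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))
      _nr_hugepages=$(( _nr_hugepages - nodes_test[_no_nodes - 1] ))
      (( _no_nodes-- ))
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512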
+([0-9]) }") 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170676172 kB' 'MemAvailable: 173905856 kB' 'Buffers: 3896 kB' 'Cached: 14587076 kB' 'SwapCached: 0 kB' 'Active: 11425148 kB' 'Inactive: 3694072 kB' 'Active(anon): 11007192 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531716 kB' 'Mapped: 181504 kB' 'Shmem: 10478944 kB' 'KReclaimable: 523744 kB' 'Slab: 1166448 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642704 kB' 'KernelStack: 20640 kB' 'PageTables: 8916 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12550704 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- 
setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ 
AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.663 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.663 13:45:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.664 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.664 13:45:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.664 13:45:12 -- setup/common.sh@33 -- # echo 0 00:03:21.664 13:45:12 -- setup/common.sh@33 -- # return 0 00:03:21.664 13:45:12 -- setup/hugepages.sh@97 -- # anon=0 00:03:21.664 13:45:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.664 13:45:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.664 13:45:12 -- setup/common.sh@18 -- # local node= 00:03:21.664 13:45:12 -- setup/common.sh@19 -- # local var val 00:03:21.664 13:45:12 -- setup/common.sh@20 -- # local mem_f mem 00:03:21.664 13:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.664 13:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.664 13:45:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.664 13:45:12 -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.664 13:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.664 13:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170678420 kB' 'MemAvailable: 173908104 kB' 'Buffers: 3896 kB' 'Cached: 14587080 kB' 'SwapCached: 0 kB' 'Active: 11424876 kB' 'Inactive: 3694072 kB' 'Active(anon): 11006920 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531352 kB' 'Mapped: 181500 kB' 'Shmem: 10478948 kB' 'KReclaimable: 523744 kB' 
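verify_nr_hugepages begins by reading back system-wide state: the transparent-hugepage setting is "always [madvise] never" (i.e. not disabled), so AnonHugePages is sampled and comes back 0 kB, meaning no THP allocations will skew the totals. The two reads, sketched against the standard kernel interfaces:

  # THP mode plus current anonymous-hugepage usage (both standard interfaces).
  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
  fi
  echo "THP: $thp, AnonHugePages: ${anon} kB"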
00:03:21.664 13:45:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:21.664 13:45:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.664 13:45:12 -- setup/common.sh@18 -- # local node=
00:03:21.664 13:45:12 -- setup/common.sh@19 -- # local var val
00:03:21.664 13:45:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.664 13:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.664 13:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.664 13:45:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.664 13:45:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.664 13:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.664 13:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170678420 kB' 'MemAvailable: 173908104 kB' 'Buffers: 3896 kB' 'Cached: 14587080 kB' 'SwapCached: 0 kB' 'Active: 11424876 kB' 'Inactive: 3694072 kB' 'Active(anon): 11006920 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531352 kB' 'Mapped: 181500 kB' 'Shmem: 10478948 kB' 'KReclaimable: 523744 kB' 'Slab: 1166448 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642704 kB' 'KernelStack: 20752 kB' 'PageTables: 8648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12550716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[xtrace condensed: setup/common.sh@31-32 compare each field above to HugePages_Surp, continuing past every non-match from MemTotal through HugePages_Rsvd]
00:03:21.665 13:45:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.665 13:45:12 -- setup/common.sh@33 -- # echo 0
00:03:21.665 13:45:12 -- setup/common.sh@33 -- # return 0
00:03:21.665 13:45:12 -- setup/hugepages.sh@99 -- # surp=0
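Surplus is likewise 0. HugePages_Surp counts pages the kernel allocated beyond nr_hugepages, which can only become non-zero when /proc/sys/vm/nr_overcommit_hugepages permits overcommit, so a clean run like this one should read 0. A quick check along those lines:

  # Surplus pages only appear when the pool is allowed to overcommit.
  surp=$(awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo)
  over=$(cat /proc/sys/vm/nr_overcommit_hugepages)
  echo "surplus=$surp (overcommit ceiling: $over)"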
00:03:21.665 13:45:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:21.665 13:45:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:21.665 13:45:12 -- setup/common.sh@18 -- # local node=
00:03:21.665 13:45:12 -- setup/common.sh@19 -- # local var val
00:03:21.665 13:45:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.665 13:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.665 13:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.665 13:45:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.665 13:45:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.666 13:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.666 13:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170679400 kB' 'MemAvailable: 173909084 kB' 'Buffers: 3896 kB' 'Cached: 14587092 kB' 'SwapCached: 0 kB' 'Active: 11424940 kB' 'Inactive: 3694072 kB' 'Active(anon): 11006984 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531376 kB' 'Mapped: 181500 kB' 'Shmem: 10478960 kB' 'KReclaimable: 523744 kB' 'Slab: 1166436 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642692 kB' 'KernelStack: 20752 kB' 'PageTables: 8592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12550732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[xtrace condensed: setup/common.sh@31-32 compare each field above to HugePages_Rsvd, continuing past every non-match; the scan has reached Percpu with no match yet]
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # continue 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': ' 00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _ 00:03:21.667 13:45:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.667 13:45:12 -- setup/common.sh@33 -- # echo 0 00:03:21.667 13:45:12 -- setup/common.sh@33 -- # return 0 
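(For readers following the trace: every get_meminfo call above is a plain key lookup over a meminfo file. A minimal sketch of that pattern, modeled on the xtrace visible here but not the actual setup/common.sh source; the function name is made up.)

  #!/usr/bin/env bash
  # Sketch of the lookup this trace performs: mapfile a meminfo file, strip the
  # "Node <n> " prefix that per-node files carry, then split each line on ': '
  # and print the value of the first key that matches.
  shopt -s extglob

  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo
      local -a mem
      local line var val _
      # with a node argument, read the per-node file instead of /proc/meminfo
      [[ -e /sys/devices/system/node/node$node/meminfo ]] \
          && mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long continue runs seen above
          echo "$val"
          return 0
      done
      return 1
  }

  get_meminfo_sketch HugePages_Rsvd      # -> 0 on this box
  get_meminfo_sketch HugePages_Surp 0    # node0 variant, reads node0/meminfo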
00:03:21.667 13:45:12 -- setup/hugepages.sh@100 -- # resv=0
00:03:21.667 13:45:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:21.667 nr_hugepages=1025
00:03:21.667 13:45:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:21.667 resv_hugepages=0
00:03:21.667 13:45:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:21.667 surplus_hugepages=0
00:03:21.667 13:45:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:21.667 anon_hugepages=0
00:03:21.667 13:45:12 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:21.667 13:45:12 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:21.667 13:45:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:21.667 13:45:12 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:21.667 13:45:12 -- setup/common.sh@18 -- # local node=
00:03:21.667 13:45:12 -- setup/common.sh@19 -- # local var val
00:03:21.667 13:45:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.667 13:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.667 13:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:21.667 13:45:12 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:21.667 13:45:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.667 13:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.667 13:45:12 -- setup/common.sh@31 -- # IFS=': '
00:03:21.667 13:45:12 -- setup/common.sh@31 -- # read -r var val _
00:03:21.667 13:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170678624 kB' 'MemAvailable: 173908308 kB' 'Buffers: 3896 kB' 'Cached: 14587104 kB' 'SwapCached: 0 kB' 'Active: 11425116 kB' 'Inactive: 3694072 kB' 'Active(anon): 11007160 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531544 kB' 'Mapped: 181500 kB' 'Shmem: 10478972 kB' 'KReclaimable: 523744 kB' 'Slab: 1166436 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642692 kB' 'KernelStack: 20768 kB' 'PageTables: 8588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12550748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317256 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[... per-key scan elided: every /proc/meminfo key up to HugePages_Total is read and skipped with continue ...]
00:03:21.669 13:45:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:21.669 13:45:12 -- setup/common.sh@33 -- # echo 1025
00:03:21.669 13:45:12 -- setup/common.sh@33 -- # return 0
00:03:21.669 13:45:12 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:21.669 13:45:12 -- setup/hugepages.sh@112 -- # get_nodes
00:03:21.669 13:45:12 -- setup/hugepages.sh@27 -- # local node
00:03:21.669 13:45:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.669 13:45:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:21.669 13:45:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:21.669 13:45:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:21.669 13:45:12 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:21.669 13:45:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
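(The bookkeeping being validated here is the identity HugePages_Total == nr_hugepages + surplus + reserved, i.e. 1025 == 1025 + 0 + 0 above, with per-node counts collected from sysfs. A sketch of the node-enumeration step follows, assuming the standard 2048 kB hugepage sysfs layout; the trace itself only shows the resulting values 512 and 513, not where they were read from.)

  #!/usr/bin/env bash
  # Enumerate NUMA nodes the way get_nodes does above and read each node's
  # 2048 kB hugepage count; paths follow the standard sysfs layout.
  shopt -s extglob nullglob

  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
  echo "per-node counts: ${nodes_sys[*]}"   # e.g. "512 513" on the machine above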
00:03:21.669 13:45:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.669 13:45:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.669 13:45:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:21.669 13:45:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.669 13:45:12 -- setup/common.sh@18 -- # local node=0
00:03:21.669 13:45:12 -- setup/common.sh@19 -- # local var val
00:03:21.669 13:45:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.669 13:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.669 13:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:21.669 13:45:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:21.669 13:45:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.669 13:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.669 13:45:12 -- setup/common.sh@31 -- # IFS=': '
00:03:21.669 13:45:12 -- setup/common.sh@31 -- # read -r var val _
00:03:21.669 13:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92461912 kB' 'MemUsed: 5153716 kB' 'SwapCached: 0 kB' 'Active: 2504140 kB' 'Inactive: 216924 kB' 'Active(anon): 2342316 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2558860 kB' 'Mapped: 65944 kB' 'AnonPages: 165540 kB' 'Shmem: 2180112 kB' 'KernelStack: 11352 kB' 'PageTables: 3908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350680 kB' 'Slab: 652752 kB' 'SReclaimable: 350680 kB' 'SUnreclaim: 302072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-key scan of node0 meminfo elided: each key is read and skipped until HugePages_Surp matches ...]
00:03:21.670 13:45:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.670 13:45:12 -- setup/common.sh@33 -- # echo 0
00:03:21.670 13:45:12 -- setup/common.sh@33 -- # return 0
00:03:21.670 13:45:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.670 13:45:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:21.670 13:45:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:21.670 13:45:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:21.670 13:45:12 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:21.670 13:45:12 -- setup/common.sh@18 -- # local node=1
00:03:21.670 13:45:12 -- setup/common.sh@19 -- # local var val
00:03:21.670 13:45:12 -- setup/common.sh@20 -- # local mem_f mem
00:03:21.670 13:45:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:21.670 13:45:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:21.670 13:45:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:21.670 13:45:12 -- setup/common.sh@28 -- # mapfile -t mem
00:03:21.670 13:45:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:21.670 13:45:12 -- setup/common.sh@31 -- # IFS=': '
00:03:21.670 13:45:12 -- setup/common.sh@31 -- # read -r var val _
00:03:21.670 13:45:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 78216360 kB' 'MemUsed: 15549148 kB' 'SwapCached: 0 kB' 'Active: 8920644 kB' 'Inactive: 3477148 kB' 'Active(anon): 8664512 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477148 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12032160 kB' 'Mapped: 115556 kB' 'AnonPages: 365692 kB' 'Shmem: 8298880 kB' 'KernelStack: 9384 kB' 'PageTables: 4704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173064 kB' 'Slab: 513684 kB' 'SReclaimable: 173064 kB' 'SUnreclaim: 340620 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... per-key scan of node1 meminfo elided: each key is read and skipped until HugePages_Surp matches ...]
00:03:21.671 13:45:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:21.671 13:45:12 -- setup/common.sh@33 -- # echo 0
00:03:21.671 13:45:12 -- setup/common.sh@33 -- # return 0
00:03:21.671 13:45:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.671 13:45:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.671 13:45:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.671 13:45:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:03:21.671 node0=512 expecting 513
00:03:21.671 13:45:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:21.671 13:45:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:21.671 13:45:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:21.671 13:45:12 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:03:21.671 node1=513 expecting 512
00:03:21.671 13:45:12 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:21.671 real 0m2.714s
00:03:21.671 user 0m1.046s
00:03:21.671 sys 0m1.616s
00:03:21.671 13:45:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:21.671 13:45:12 -- common/autotest_common.sh@10 -- # set +x
00:03:21.671 ************************************
00:03:21.671 END TEST odd_alloc
00:03:21.671 ************************************
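(odd_alloc passes as long as the 1025 pages split across the two nodes in either order, which is why the comparison goes through sorted_t/sorted_s rather than node by node. The trick, sketched with the values from the run above:)

  #!/usr/bin/env bash
  # Order-insensitive comparison: using each count as an array *index* means
  # "${!arr[*]}" lists the distinct counts in ascending order, so {512,513}
  # matches {513,512}.
  nodes_sys=(512 513)    # measured from sysfs (get_nodes above)
  nodes_test=(513 512)   # what the test asked for ("node0=512 expecting 513")
  declare -a sorted_t sorted_s
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1
      sorted_s[nodes_sys[node]]=1
  done
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "distribution OK: ${!sorted_t[*]}"
  # -> distribution OK: 512 513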
00:03:21.671 13:45:12 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:21.671 13:45:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:21.671 13:45:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:21.671 13:45:12 -- common/autotest_common.sh@10 -- # set +x
00:03:21.671 ************************************
00:03:21.671 START TEST custom_alloc
00:03:21.671 ************************************
00:03:21.671 13:45:12 -- common/autotest_common.sh@1104 -- # custom_alloc
00:03:21.671 13:45:12 -- setup/hugepages.sh@167 -- # local IFS=,
00:03:21.671 13:45:12 -- setup/hugepages.sh@169 -- # local node
00:03:21.671 13:45:12 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:21.671 13:45:12 -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:21.671 13:45:12 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:21.671 13:45:12 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:21.671 13:45:12 -- setup/hugepages.sh@49 -- # local size=1048576
00:03:21.671 13:45:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:21.671 13:45:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:21.671 13:45:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:21.671 13:45:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:21.671 13:45:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:21.671 13:45:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:21.671 13:45:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:21.671 13:45:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:21.671 13:45:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:21.671 13:45:12 -- setup/hugepages.sh@83 -- # : 256
00:03:21.671 13:45:12 -- setup/hugepages.sh@84 -- # : 1
00:03:21.671 13:45:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:21.671 13:45:12 -- setup/hugepages.sh@83 -- # : 0
00:03:21.671 13:45:12 -- setup/hugepages.sh@84 -- # : 0
00:03:21.671 13:45:12 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:21.671 13:45:12 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:21.671 13:45:12 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:21.671 13:45:12 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:21.671 13:45:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:21.671 13:45:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:21.671 13:45:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:21.671 13:45:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:21.671 13:45:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:21.671 13:45:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:21.671 13:45:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:21.671 13:45:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:21.672 13:45:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:21.672 13:45:12 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:21.672 13:45:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:21.672 13:45:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:21.672 13:45:12 -- setup/hugepages.sh@78 -- # return 0
00:03:21.672 13:45:12 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:21.672 13:45:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:21.672 13:45:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:21.672 13:45:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:21.672 13:45:12 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:21.672 13:45:12 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:21.672 13:45:12 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:21.672 13:45:12 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:21.672 13:45:12 -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:21.672 13:45:12 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:21.672 13:45:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:21.672 13:45:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:21.672 13:45:12 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:21.672 13:45:12 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:21.672 13:45:12 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:21.672 13:45:12 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:21.672 13:45:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:21.672 13:45:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:21.672 13:45:12 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:21.672 13:45:12 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:21.672 13:45:12 -- setup/hugepages.sh@78 -- # return 0
00:03:21.672 13:45:12 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:21.672 13:45:12 -- setup/hugepages.sh@187 -- # setup output
00:03:21.672 13:45:12 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:21.672 13:45:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
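(HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' is the per-node request handed to scripts/setup.sh just above. A sketch of how a spec of that shape maps onto the kernel's per-node hugepage knobs; only the HUGENODE string itself comes from this log, the parsing below and whether setup.sh does exactly this are assumptions:)

  #!/usr/bin/env bash
  # Parse a HUGENODE spec of the form nodes_hp[<node>]=<pages>,... and show the
  # per-node sysfs write each term implies (echoed rather than written; the
  # real write needs root).
  HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'

  IFS=, read -ra terms <<< "$HUGENODE"
  for term in "${terms[@]}"; do
      node=${term%%]*}; node=${node#*[}   # text between '[' and ']'
      pages=${term#*=}                    # text after '='
      echo "node$node: $pages pages -> /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
  done
  # node0: 512 pages  -> .../node0/hugepages/hugepages-2048kB/nr_hugepages
  # node1: 1024 pages -> .../node1/hugepages/hugepages-2048kB/nr_hugepages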
vfio-pci driver 00:03:24.964 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:24.964 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:24.964 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:24.964 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:24.964 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:24.964 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:24.964 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:24.964 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:24.964 13:45:15 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:24.964 13:45:15 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:24.964 13:45:15 -- setup/hugepages.sh@89 -- # local node 00:03:24.964 13:45:15 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:24.964 13:45:15 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:24.964 13:45:15 -- setup/hugepages.sh@92 -- # local surp 00:03:24.964 13:45:15 -- setup/hugepages.sh@93 -- # local resv 00:03:24.964 13:45:15 -- setup/hugepages.sh@94 -- # local anon 00:03:24.964 13:45:15 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:24.964 13:45:15 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:24.964 13:45:15 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:24.965 13:45:15 -- setup/common.sh@18 -- # local node= 00:03:24.965 13:45:15 -- setup/common.sh@19 -- # local var val 00:03:24.965 13:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.965 13:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.965 13:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.965 13:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.965 13:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.965 13:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.965 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.965 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.965 13:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169649976 kB' 'MemAvailable: 172879660 kB' 'Buffers: 3896 kB' 'Cached: 14587204 kB' 'SwapCached: 0 kB' 'Active: 11426992 kB' 'Inactive: 3694072 kB' 'Active(anon): 11009036 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532820 kB' 'Mapped: 181644 kB' 'Shmem: 10479072 kB' 'KReclaimable: 523744 kB' 'Slab: 1166672 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642928 kB' 'KernelStack: 20560 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12546836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317208 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:24.965 13:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.965 13:45:15 -- setup/common.sh@32 -- # 
continue 00:03:24.965 [xtrace condensed: setup/common.sh@31-32 repeat the IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue cycle for every remaining /proc/meminfo field from MemFree through Percpu; none match]
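The condensed block above is get_meminfo from setup/common.sh: it slurps the chosen meminfo file with mapfile, then walks it with IFS=': ' read -r var val _, continuing past every field until the requested key matches (AnonHugePages here, which reports 0 on this box). Note the [[ -e /sys/devices/system/node/node/meminfo ]] test in the trace: with no node argument $node is empty, the path does not exist, and the function stays on /proc/meminfo. A simplified standalone sketch of that scan, assuming a plain while-read in place of the script's mapfile-plus-extglob approach:

# Simplified sketch of get_meminfo (setup/common.sh); illustrative, not the original.
get_meminfo() {
    local get=$1 node=${2:-} mem_f=/proc/meminfo line var val rest
    # per-node counters live in sysfs when a node index is supplied
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}        # sysfs lines carry a 'Node N ' prefix
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"                   # kB for sized fields, a bare count for HugePages_*
            return 0
        fi
    done < "$mem_f"
    return 1
}

get_meminfo AnonHugePages      # 0 on this host, per the trace
get_meminfo HugePages_Total    # 1536 once the pool below is configured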
00:03:24.966 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.966 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.966 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:24.966 13:45:15 -- setup/common.sh@33 -- # echo 0 00:03:24.966 13:45:15 -- setup/common.sh@33 -- # return 0 00:03:24.966 13:45:15 -- setup/hugepages.sh@97 -- # anon=0 00:03:24.966 13:45:15 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:24.966 13:45:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.966 13:45:15 -- setup/common.sh@18 -- # local node= 00:03:24.966 13:45:15 -- setup/common.sh@19 -- # local var val 00:03:24.966 13:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.966 13:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.966 13:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.966 13:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.966 13:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.966 13:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.966 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.966 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.966 13:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169651488 kB' 'MemAvailable: 172881172 kB' 'Buffers: 3896 kB' 'Cached: 14587204 kB' 'SwapCached: 0 kB' 'Active: 11426964 kB' 'Inactive: 3694072 kB' 'Active(anon): 11009008 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532832 kB' 'Mapped: 181644 kB' 'Shmem: 10479072 kB' 'KReclaimable: 523744 kB' 'Slab: 1166728 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642984 kB' 'KernelStack: 20640 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12546848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.966 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.966 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.966 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.966 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.966 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.966 13:45:15 -- 
setup/common.sh@31 -- # IFS=': ' [xtrace condensed: the same per-field cycle repeats for the HugePages_Surp lookup, testing every field from Buffers through AnonHugePages against \H\u\g\e\P\a\g\e\s\_\S\u\r\p; none match]
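Each get_meminfo call rescans the entire file, so this verification pass walks /proc/meminfo once per counter (AnonHugePages, then HugePages_Surp here, with HugePages_Rsvd and HugePages_Total still to come). That is harmless at this scale, but for comparison, a single-pass variant that collects all four keys in one read might look like this (a sketch, not what the script does):

# One-pass alternative: gather several counters in a single scan of /proc/meminfo.
declare -A want=([AnonHugePages]= [HugePages_Surp]= [HugePages_Rsvd]= [HugePages_Total]=)
while IFS=': ' read -r var val _; do
    # record the value only for keys we asked about
    [[ ${want[$var]+set} ]] && want[$var]=$val
done < /proc/meminfo
for k in "${!want[@]}"; do printf '%s=%s\n' "$k" "${want[$k]}"; done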
00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.967 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.967 13:45:15 -- setup/common.sh@33 -- # echo 0 00:03:24.967 13:45:15 -- setup/common.sh@33 -- # return 0 00:03:24.967 13:45:15 -- setup/hugepages.sh@99 -- # surp=0 00:03:24.967 13:45:15 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:24.967 13:45:15 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:24.967 13:45:15 -- setup/common.sh@18 -- # local node= 00:03:24.967 13:45:15 -- setup/common.sh@19 -- # local var val 00:03:24.967 13:45:15 -- setup/common.sh@20 
-- # local mem_f mem 00:03:24.967 13:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.967 13:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.967 13:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.967 13:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.967 13:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.967 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.968 13:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169653076 kB' 'MemAvailable: 172882760 kB' 'Buffers: 3896 kB' 'Cached: 14587220 kB' 'SwapCached: 0 kB' 'Active: 11424772 kB' 'Inactive: 3694072 kB' 'Active(anon): 11006816 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531112 kB' 'Mapped: 181508 kB' 'Shmem: 10479088 kB' 'KReclaimable: 523744 kB' 'Slab: 1166712 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642968 kB' 'KernelStack: 20592 kB' 'PageTables: 8604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12546864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.968 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.968 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.968 
13:45:15 -- setup/common.sh@32 -- # [xtrace condensed: the HugePages_Rsvd lookup skips every field from Active through FileHugePages with the same [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue cycle; none match]
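With anon and surp already read and resv about to come back 0, verify_nr_hugepages (hugepages.sh@107-110, visible just below) asserts that the kernel's pool matches what was requested: HugePages_Total must equal nr_hugepages + surp + resv. Restated as a standalone sketch, with the constants taken from the trace:

# The accounting identity verify_nr_hugepages checks, restated (sketch).
nr_hugepages=1536 surp=0 resv=0     # values echoed in the trace below
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
else
    echo "hugepage accounting mismatch: HugePages_Total=$total" >&2
fi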
read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:24.969 13:45:15 -- setup/common.sh@33 -- # echo 0 00:03:24.969 13:45:15 -- setup/common.sh@33 -- # return 0 00:03:24.969 13:45:15 -- setup/hugepages.sh@100 -- # resv=0 00:03:24.969 13:45:15 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:24.969 nr_hugepages=1536 00:03:24.969 13:45:15 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:24.969 resv_hugepages=0 00:03:24.969 13:45:15 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:24.969 surplus_hugepages=0 00:03:24.969 13:45:15 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:24.969 anon_hugepages=0 00:03:24.969 13:45:15 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:24.969 13:45:15 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:24.969 13:45:15 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:24.969 13:45:15 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:24.969 13:45:15 -- setup/common.sh@18 -- # local node= 00:03:24.969 13:45:15 -- setup/common.sh@19 -- # local var val 00:03:24.969 13:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.969 13:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.969 13:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:24.969 13:45:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:24.969 13:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.969 13:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.969 13:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169653300 kB' 'MemAvailable: 172882984 kB' 'Buffers: 3896 kB' 'Cached: 14587244 kB' 'SwapCached: 0 kB' 'Active: 11424436 kB' 'Inactive: 3694072 kB' 
'Active(anon): 11006480 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530728 kB' 'Mapped: 181508 kB' 'Shmem: 10479112 kB' 'KReclaimable: 523744 kB' 'Slab: 1166712 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642968 kB' 'KernelStack: 20576 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12546876 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.969 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.969 13:45:15 -- setup/common.sh@32 -- # continue [xtrace condensed: the HugePages_Total lookup skips every field from Active(anon) through CmaTotal with the same per-field cycle; none match]
13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.970 13:45:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.970 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.970 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.970 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.970 13:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.970 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.970 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.970 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.971 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:24.971 13:45:15 -- setup/common.sh@33 -- # echo 1536 00:03:24.971 13:45:15 -- setup/common.sh@33 -- # return 0 00:03:24.971 13:45:15 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:24.971 13:45:15 -- setup/hugepages.sh@112 -- # get_nodes 00:03:24.971 13:45:15 -- setup/hugepages.sh@27 -- # local node 00:03:24.971 13:45:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.971 13:45:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:24.971 13:45:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:24.971 13:45:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:24.971 13:45:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:24.971 13:45:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:24.971 13:45:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.971 13:45:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.971 13:45:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:24.971 13:45:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.971 13:45:15 -- setup/common.sh@18 -- # local node=0 00:03:24.971 13:45:15 -- setup/common.sh@19 -- # local var val 00:03:24.971 13:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.971 13:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.971 13:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.971 13:45:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.971 13:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.971 13:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.971 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.971 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.971 13:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92479372 kB' 'MemUsed: 5136256 kB' 'SwapCached: 0 kB' 'Active: 2504720 kB' 'Inactive: 216924 kB' 'Active(anon): 2342896 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2558972 kB' 'Mapped: 65944 kB' 'AnonPages: 165940 kB' 'Shmem: 2180224 kB' 'KernelStack: 11336 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350680 kB' 'Slab: 652852 kB' 'SReclaimable: 350680 kB' 'SUnreclaim: 302172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:24.971 13:45:15 -- setup/common.sh@32 -- # [[ MemTotal == 
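The hugepages.sh@27-33 lines just above are the whole of get_nodes. A minimal bash sketch of the loop the trace shows; the right-hand side of the nodes_sys assignment appears only in already-expanded form in the xtrace output (512, 1024), so reading it from the per-node nr_hugepages sysfs counter here is an assumption, as is the 2048 kB page-size directory:

  shopt -s extglob                       # the node+([0-9]) glob above requires extglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # ${node##*node} strips everything up to the last "node", leaving the NUMA index
      # assumption: the 512/1024 seen in the trace come from this sysfs counter
      nodes_sys[${node##*node}]=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  no_nodes=${#nodes_sys[@]}              # 2 on this machine, matching "no_nodes=2" above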
00:03:24.971 13:45:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.971 13:45:15 -- setup/common.sh@18 -- # local node=0 00:03:24.971 13:45:15 -- setup/common.sh@19 -- # local var val 00:03:24.971 13:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.971 13:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.971 13:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:24.971 13:45:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:24.971 13:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.971 13:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.971 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.971 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.971 13:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92479372 kB' 'MemUsed: 5136256 kB' 'SwapCached: 0 kB' 'Active: 2504720 kB' 'Inactive: 216924 kB' 'Active(anon): 2342896 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2558972 kB' 'Mapped: 65944 kB' 'AnonPages: 165940 kB' 'Shmem: 2180224 kB' 'KernelStack: 11336 kB' 'PageTables: 3816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350680 kB' 'Slab: 652852 kB' 'SReclaimable: 350680 kB' 'SUnreclaim: 302172 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the same per-key scan runs over the node0 fields (MemTotal through HugePages_Free), each skipped via continue, until HugePages_Surp matches]
00:03:24.972 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.972 13:45:15 -- setup/common.sh@33 -- # echo 0 00:03:24.972 13:45:15 -- setup/common.sh@33 -- # return 0 00:03:24.972 13:45:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
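The IFS/read/continue runs condensed above are a single loop in setup/common.sh's get_meminfo helper. A self-contained sketch of the behaviour visible in the trace, paraphrased from the xtrace output rather than copied from the script (only the expanded commands are visible, so the while/process-substitution shape is an assumption):

  shopt -s extglob                            # for the +([0-9]) patterns seen above
  get_meminfo() {
      local get=$1 node=$2 var val _ mem_f=/proc/meminfo
      # @23/@24 above: with a node argument, prefer the per-node meminfo file
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")        # per-node lines carry a "Node N " prefix
      local IFS=': '
      while read -r var val _; do
          [[ $var == "$get" ]] || continue    # the long runs of "continue" in the trace
          echo "$val"                         # e.g. 1536 for HugePages_Total earlier
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }
  get_meminfo HugePages_Surp 0                # prints 0 on this box, per the trace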
00:03:24.972 13:45:15 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:24.972 13:45:15 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:24.972 13:45:15 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:24.972 13:45:15 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:24.972 13:45:15 -- setup/common.sh@18 -- # local node=1 00:03:24.972 13:45:15 -- setup/common.sh@19 -- # local var val 00:03:24.972 13:45:15 -- setup/common.sh@20 -- # local mem_f mem 00:03:24.972 13:45:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:24.972 13:45:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:24.972 13:45:15 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:24.972 13:45:15 -- setup/common.sh@28 -- # mapfile -t mem 00:03:24.972 13:45:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:24.972 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.972 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.972 13:45:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77173928 kB' 'MemUsed: 16591580 kB' 'SwapCached: 0 kB' 'Active: 8920064 kB' 'Inactive: 3477148 kB' 'Active(anon): 8663932 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477148 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12032168 kB' 'Mapped: 115564 kB' 'AnonPages: 365132 kB' 'Shmem: 8298888 kB' 'KernelStack: 9240 kB' 'PageTables: 4736 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 173064 kB' 'Slab: 513860 kB' 'SReclaimable: 173064 kB' 'SUnreclaim: 340796 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the per-key scan repeats over the node1 fields, each skipped via continue, until HugePages_Surp matches]
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.973 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.973 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 13:45:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.973 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.973 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.973 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.973 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.973 13:45:15 -- setup/common.sh@32 -- # continue 00:03:24.973 13:45:15 -- setup/common.sh@31 -- # IFS=': ' 00:03:24.973 13:45:15 -- setup/common.sh@31 -- # read -r var val _ 00:03:24.973 13:45:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:24.973 13:45:15 -- setup/common.sh@33 -- # echo 0 00:03:24.973 13:45:15 -- setup/common.sh@33 -- # return 0 00:03:24.973 13:45:15 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:24.973 13:45:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.973 13:45:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.973 13:45:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.973 13:45:15 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:24.973 node0=512 expecting 512 00:03:24.973 13:45:15 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:24.973 13:45:15 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:24.973 13:45:15 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:24.973 13:45:15 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:24.973 node1=1024 expecting 1024 00:03:24.973 13:45:15 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:24.973 00:03:24.973 real 0m3.069s 00:03:24.973 user 0m1.271s 00:03:24.973 sys 0m1.873s 00:03:24.973 13:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.973 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:03:24.973 ************************************ 00:03:24.973 END TEST custom_alloc 00:03:24.973 ************************************ 00:03:24.973 13:45:15 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:24.973 13:45:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:24.973 13:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:24.973 13:45:15 -- common/autotest_common.sh@10 -- # set +x 00:03:24.973 ************************************ 00:03:24.973 START TEST no_shrink_alloc 00:03:24.973 ************************************ 00:03:24.973 13:45:15 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:24.973 13:45:15 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:24.973 13:45:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:24.973 13:45:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:24.973 13:45:15 -- setup/hugepages.sh@51 -- # shift 00:03:24.973 13:45:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:24.973 13:45:15 -- setup/hugepages.sh@52 -- # local node_ids 
00:03:24.973 13:45:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.973 13:45:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:24.973 13:45:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:24.973 13:45:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:24.973 13:45:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.973 13:45:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:24.973 13:45:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.973 13:45:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.973 13:45:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.973 13:45:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:24.973 13:45:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.973 13:45:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:24.973 13:45:15 -- setup/hugepages.sh@73 -- # return 0 00:03:24.973 13:45:15 -- setup/hugepages.sh@198 -- # setup output 00:03:24.973 13:45:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.973 13:45:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:27.508 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:27.508 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:27.508 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:27.508 13:45:18 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:27.508 13:45:18 -- setup/hugepages.sh@89 -- # local node 00:03:27.508 13:45:18 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:27.508 13:45:18 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:27.508 13:45:18 -- setup/hugepages.sh@92 -- # local surp 00:03:27.508 13:45:18 -- setup/hugepages.sh@93 -- # local resv 00:03:27.508 13:45:18 -- setup/hugepages.sh@94 -- # local anon 00:03:27.508 13:45:18 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:27.508 13:45:18 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:27.508 13:45:18 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:27.508 13:45:18 -- setup/common.sh@18 -- # local node= 00:03:27.508 13:45:18 -- setup/common.sh@19 -- # local var val 00:03:27.508 13:45:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.508 13:45:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.508 13:45:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.508 13:45:18 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.508 13:45:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.508 13:45:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.508 13:45:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170709392 kB' 'MemAvailable: 173939076 kB' 'Buffers: 3896 kB' 'Cached: 14587320 kB' 'SwapCached: 0 kB' 'Active: 11426552 kB' 'Inactive: 3694072 kB' 'Active(anon): 11008596 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532564 kB' 'Mapped: 181560 kB' 'Shmem: 10479188 kB' 'KReclaimable: 523744 kB' 'Slab: 1166860 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643116 kB' 'KernelStack: 20688 kB' 'PageTables: 9244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317288 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[xtrace condensed: setup/common.sh@31-32 scans each /proc/meminfo key (MemTotal through HardwareCorrupted), skipping every non-match via continue, until AnonHugePages matches]
00:03:27.509 13:45:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:27.509 13:45:18 -- setup/common.sh@33 -- # echo 0 00:03:27.509 13:45:18 -- setup/common.sh@33 -- # return 0 00:03:27.509 13:45:18 -- setup/hugepages.sh@97 -- # anon=0
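verify_nr_hugepages is collecting the counters it will reconcile: anon was just captured at @97, and the @99/@100 calls below fetch HugePages_Surp and HugePages_Rsvd the same way. A sketch of the bookkeeping, reusing the get_meminfo sketch given earlier; the final comparison shape is inferred from the "(( 1536 == nr_hugepages + surp + resv ))" check traced in the previous test, so treat it as an assumption:

  nr_hugepages=1024                      # configured by get_test_nr_hugepages above
  anon=$(get_meminfo AnonHugePages)      # 0 kB here: no transparent hugepages interfering
  surp=$(get_meminfo HugePages_Surp)     # 0 here
  resv=$(get_meminfo HugePages_Rsvd)     # queried next in the trace
  total=$(get_meminfo HugePages_Total)   # 1024 here
  (( total == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'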
00:03:27.509 13:45:18 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:27.509 13:45:18 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:27.509 13:45:18 -- setup/common.sh@18 -- # local node= 00:03:27.509 13:45:18 -- setup/common.sh@19 -- # local var val 00:03:27.509 13:45:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.509 13:45:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.509 13:45:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.509 13:45:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.509 13:45:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.509 13:45:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.509 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.509 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.510 13:45:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170712712 kB' 'MemAvailable: 173942396 kB' 'Buffers: 3896 kB' 'Cached: 14587324 kB' 'SwapCached: 0 kB' 'Active: 11425996 kB' 'Inactive: 3694072 kB' 'Active(anon): 11008040 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532028 kB' 'Mapped: 181552 kB' 'Shmem: 10479192 kB' 'KReclaimable: 523744 kB' 'Slab: 1166856 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643112 kB' 'KernelStack: 20704 kB' 'PageTables: 9248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547356 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[xtrace condensed: the same per-key scan runs again over /proc/meminfo, skipping every key via continue until HugePages_Surp matches]
13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:27.511 13:45:18 -- setup/common.sh@33 -- # echo 0 00:03:27.511 13:45:18 -- setup/common.sh@33 -- # return 0 00:03:27.511 13:45:18 -- setup/hugepages.sh@99 -- # surp=0 00:03:27.511 13:45:18 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:27.511 13:45:18 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:27.511 13:45:18 -- setup/common.sh@18 -- # local node= 00:03:27.511 13:45:18 -- setup/common.sh@19 -- # local var val 00:03:27.511 13:45:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.511 13:45:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.511 13:45:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.511 13:45:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.511 13:45:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.511 13:45:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170713188 kB' 'MemAvailable: 173942872 kB' 'Buffers: 3896 kB' 'Cached: 14587332 kB' 'SwapCached: 0 kB' 'Active: 11426320 kB' 'Inactive: 3694072 kB' 'Active(anon): 11008364 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532372 kB' 'Mapped: 181552 kB' 'Shmem: 10479200 kB' 'KReclaimable: 523744 kB' 'Slab: 1166944 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643200 kB' 'KernelStack: 20736 kB' 'PageTables: 9376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.511 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.511 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 
00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 
13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.512 13:45:18 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:27.512 13:45:18 -- setup/common.sh@33 -- # echo 0 00:03:27.512 13:45:18 -- setup/common.sh@33 -- # return 0 00:03:27.512 13:45:18 -- setup/hugepages.sh@100 -- # resv=0 00:03:27.512 13:45:18 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:27.512 nr_hugepages=1024 00:03:27.512 13:45:18 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:27.512 resv_hugepages=0 00:03:27.512 13:45:18 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:27.512 surplus_hugepages=0 00:03:27.512 13:45:18 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:27.512 anon_hugepages=0 00:03:27.512 13:45:18 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:27.512 13:45:18 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:27.512 13:45:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:27.512 13:45:18 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:27.512 13:45:18 -- setup/common.sh@18 -- # local node= 00:03:27.512 13:45:18 -- setup/common.sh@19 -- # local var val 00:03:27.512 13:45:18 -- setup/common.sh@20 -- # local mem_f mem 00:03:27.512 13:45:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:27.512 13:45:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:27.512 13:45:18 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:27.512 13:45:18 -- setup/common.sh@28 -- # mapfile -t mem 00:03:27.512 13:45:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.512 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.513 13:45:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170712936 kB' 'MemAvailable: 173942620 kB' 'Buffers: 3896 kB' 'Cached: 14587348 kB' 'SwapCached: 0 kB' 'Active: 11426016 kB' 'Inactive: 3694072 kB' 'Active(anon): 11008060 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532032 kB' 'Mapped: 181552 kB' 'Shmem: 10479216 kB' 'KReclaimable: 523744 kB' 'Slab: 1166944 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643200 kB' 'KernelStack: 20720 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB' 00:03:27.513 13:45:18 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.513 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.513 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.513 13:45:18 -- setup/common.sh@31 -- # read -r var val _ 00:03:27.513 13:45:18 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:27.513 13:45:18 -- setup/common.sh@32 -- # continue 00:03:27.513 13:45:18 -- setup/common.sh@31 -- # IFS=': ' 00:03:27.513 13:45:18 -- setup/common.sh@31 
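Everything traced above is set -x output from one small helper: get_meminfo answers "what is the value of key X in /proc/meminfo (or in one node's sysfs meminfo)?" by reading the whole file and scanning it key by key, which is why the log shows one [[ ... ]]/continue pair per field. A minimal sketch of that pattern, assuming bash 4+ with extglob; the name and internals mirror the trace, but this is an illustration, not the exact SPDK helper:

shopt -s extglob   # needed for the +([0-9]) pattern below
# Sketch of the traced lookup: print the value for one meminfo key, optionally per node.
get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ mem_f=/proc/meminfo
        local -a mem
        # Per-node statistics live under sysfs; the global view is /proc/meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
                mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
        local line
        for line in "${mem[@]}"; do
                IFS=': ' read -r var val _ <<< "$line"
                [[ $var == "$get" ]] && { printf '%s\n' "$val"; return 0; }
        done
        printf '0\n'
}

Called as in the trace: get_meminfo_sketch HugePages_Rsvd reads the global file, get_meminfo_sketch HugePages_Surp 0 the node-0 view.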
00:03:27.512 13:45:18 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:27.512 13:45:18 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:27.512 13:45:18 -- setup/common.sh@18 -- # local node=
00:03:27.512 13:45:18 -- setup/common.sh@19 -- # local var val
00:03:27.512 13:45:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.512 13:45:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.512 13:45:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:27.512 13:45:18 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:27.512 13:45:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.512 13:45:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.512 13:45:18 -- setup/common.sh@31 -- # IFS=': '
00:03:27.513 13:45:18 -- setup/common.sh@31 -- # read -r var val _
00:03:27.513 13:45:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170712936 kB' 'MemAvailable: 173942620 kB' 'Buffers: 3896 kB' 'Cached: 14587348 kB' 'SwapCached: 0 kB' 'Active: 11426016 kB' 'Inactive: 3694072 kB' 'Active(anon): 11008060 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532032 kB' 'Mapped: 181552 kB' 'Shmem: 10479216 kB' 'KReclaimable: 523744 kB' 'Slab: 1166944 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643200 kB' 'KernelStack: 20720 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[... identical [[ <field> == HugePages_Total ]] / continue pairs elided until the matching key ...]
00:03:27.514 13:45:18 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:27.514 13:45:18 -- setup/common.sh@33 -- # echo 1024
00:03:27.514 13:45:18 -- setup/common.sh@33 -- # return 0
00:03:27.514 13:45:18 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:27.514 13:45:18 -- setup/hugepages.sh@112 -- # get_nodes
00:03:27.514 13:45:18 -- setup/hugepages.sh@27 -- # local node
00:03:27.514 13:45:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.514 13:45:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:27.514 13:45:18 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:27.514 13:45:18 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:27.514 13:45:18 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:27.514 13:45:18 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:27.514 13:45:18 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:27.514 13:45:18 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
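With surp and resv in hand, hugepages.sh@107-110 checks the kernel's global counter against the requested count plus surplus and reserved pages before turning to the per-node walk. Rendered as standalone shell, reusing the illustrative get_meminfo_sketch from above (the literal values are the ones this run observed):

# Global hugepage accounting check, as in hugepages.sh@107-110 above.
nr_hugepages=1024                               # requested by the test
surp=$(get_meminfo_sketch HugePages_Surp)       # 0 in this run
resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0 in this run
total=$(get_meminfo_sketch HugePages_Total)     # 1024 in this run
if (( total == nr_hugepages + surp + resv )); then
        echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
fi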
00:03:27.514 13:45:18 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:27.514 13:45:18 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:27.514 13:45:18 -- setup/common.sh@18 -- # local node=0
00:03:27.514 13:45:18 -- setup/common.sh@19 -- # local var val
00:03:27.514 13:45:18 -- setup/common.sh@20 -- # local mem_f mem
00:03:27.514 13:45:18 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:27.514 13:45:18 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:27.514 13:45:18 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:27.514 13:45:18 -- setup/common.sh@28 -- # mapfile -t mem
00:03:27.514 13:45:18 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:27.514 13:45:18 -- setup/common.sh@31 -- # IFS=': '
00:03:27.514 13:45:18 -- setup/common.sh@31 -- # read -r var val _
00:03:27.514 13:45:18 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91432688 kB' 'MemUsed: 6182940 kB' 'SwapCached: 0 kB' 'Active: 2505020 kB' 'Inactive: 216924 kB' 'Active(anon): 2343196 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2559080 kB' 'Mapped: 65948 kB' 'AnonPages: 166036 kB' 'Shmem: 2180332 kB' 'KernelStack: 11448 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 350680 kB' 'Slab: 653224 kB' 'SReclaimable: 350680 kB' 'SUnreclaim: 302544 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... identical [[ <field> == HugePages_Surp ]] / continue pairs elided over the node0 stats ...]
00:03:27.515 13:45:18 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:27.515 13:45:18 -- setup/common.sh@33 -- # echo 0
00:03:27.515 13:45:18 -- setup/common.sh@33 -- # return 0
00:03:27.515 13:45:18 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:27.515 13:45:18 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:27.515 13:45:18 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:27.515 13:45:18 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:27.515 13:45:18 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:03:27.515 13:45:18 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:27.515 13:45:18 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:27.515 13:45:18 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:27.515 13:45:18 -- setup/hugepages.sh@202 -- # setup output
00:03:27.515 13:45:18 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.515 13:45:18 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:30.052 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:30.052 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:30.052 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
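The hugepages.sh@202 lines re-enter setup with a smaller request: CLEAR_HUGE=no and NRHUGE=512 are set before scripts/setup.sh runs, which is also where the vfio-pci claims above are printed. As a standalone command (path as in this workspace; the semantics of the two variables are inferred from the INFO line that follows, not from the script source):

# Re-run the allocator without clearing the existing pool; with 1024 pages
# already on node0, the 512-page request is already satisfied.
CLEAR_HUGE=no NRHUGE=512 \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh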
00:03:30.052 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:03:30.052 13:45:20 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:30.052 13:45:20 -- setup/hugepages.sh@89 -- # local node
00:03:30.052 13:45:20 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:30.052 13:45:20 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:30.052 13:45:20 -- setup/hugepages.sh@92 -- # local surp
00:03:30.052 13:45:20 -- setup/hugepages.sh@93 -- # local resv
00:03:30.052 13:45:20 -- setup/hugepages.sh@94 -- # local anon
00:03:30.052 13:45:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:30.052 13:45:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:30.052 13:45:20 -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:30.052 13:45:20 -- setup/common.sh@18 -- # local node=
00:03:30.052 13:45:20 -- setup/common.sh@19 -- # local var val
00:03:30.052 13:45:20 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.052 13:45:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.052 13:45:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.052 13:45:20 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.052 13:45:20 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.052 13:45:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.052 13:45:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170707428 kB' 'MemAvailable: 173937112 kB' 'Buffers: 3896 kB' 'Cached: 14587428 kB' 'SwapCached: 0 kB' 'Active: 11425240 kB' 'Inactive: 3694072 kB' 'Active(anon): 11007284 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531284 kB' 'Mapped: 181580 kB' 'Shmem: 10479296 kB' 'KReclaimable: 523744 kB' 'Slab: 1166768 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643024 kB' 'KernelStack: 20608 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
00:03:30.052 13:45:20 -- setup/common.sh@31 -- # IFS=': '
00:03:30.052 13:45:20 -- setup/common.sh@31 -- # read -r var val _
00:03:30.052 13:45:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:30.052 13:45:20 -- setup/common.sh@32 -- # continue
[... identical [[ <field> == AnonHugePages ]] / continue pairs elided; the excerpt ends mid-scan ...]
# [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # continue 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': ' 00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _ 00:03:30.053 13:45:21 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:30.053 13:45:21 -- setup/common.sh@33 -- # echo 0 00:03:30.053 13:45:21 -- setup/common.sh@33 -- # return 0 00:03:30.053 13:45:21 -- 
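The get_meminfo calls traced above read a meminfo file, scan it field by field, and print the value of the one requested key. Below is a minimal sketch of that pattern, not SPDK's actual setup/common.sh: the function name is illustrative, while the file paths and field layout are exactly what the log shows.

get_meminfo_sketch() {
    # usage: get_meminfo_sketch <field> [<numa-node>]
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # per-node stats live under sysfs; every line there carries a "Node <n> " prefix
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # split each "Key:  value kB" line and stop at the first matching key,
    # mirroring the IFS=': ' / read -r var val _ loop in the trace
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_sketch AnonHugePages   # prints 0 on this machine, matching the trace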
00:03:30.053 13:45:21 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:30.053 13:45:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:30.053 13:45:21 -- setup/common.sh@18 -- # local node=
00:03:30.053 13:45:21 -- setup/common.sh@19 -- # local var val
00:03:30.053 13:45:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.053 13:45:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.053 13:45:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.053 13:45:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.053 13:45:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.053 13:45:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.053 13:45:21 -- setup/common.sh@31 -- # IFS=': '
00:03:30.053 13:45:21 -- setup/common.sh@31 -- # read -r var val _
00:03:30.053 13:45:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170713556 kB' 'MemAvailable: 173943240 kB' 'Buffers: 3896 kB' 'Cached: 14587432 kB' 'SwapCached: 0 kB' 'Active: 11425468 kB' 'Inactive: 3694072 kB' 'Active(anon): 11007512 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531532 kB' 'Mapped: 181580 kB' 'Shmem: 10479300 kB' 'KReclaimable: 523744 kB' 'Slab: 1166716 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 642972 kB' 'KernelStack: 20576 kB' 'PageTables: 8568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[... identical "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" pairs repeat for every field from MemTotal through HugePages_Rsvd ...]
00:03:30.055 13:45:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.055 13:45:21 -- setup/common.sh@33 -- # echo 0
00:03:30.055 13:45:21 -- setup/common.sh@33 -- # return 0
00:03:30.055 13:45:21 -- setup/hugepages.sh@99 -- # surp=0
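HugePages_Surp stays 0 here because surplus pages only appear when hugepage overcommit is enabled; otherwise the pool is static. A quick way to confirm that on a similar box, as a sketch assuming a stock Linux /proc layout:

cat /proc/sys/vm/nr_overcommit_hugepages                     # 0 means the kernel may not grow the pool on demand
grep -E '^HugePages_(Total|Free|Rsvd|Surp):' /proc/meminfo   # the four counters the script is about to verify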
00:03:30.055 13:45:21 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:30.055 13:45:21 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:30.055 13:45:21 -- setup/common.sh@18 -- # local node=
00:03:30.055 13:45:21 -- setup/common.sh@19 -- # local var val
00:03:30.055 13:45:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.055 13:45:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.055 13:45:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.055 13:45:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.055 13:45:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.055 13:45:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.055 13:45:21 -- setup/common.sh@31 -- # IFS=': '
00:03:30.055 13:45:21 -- setup/common.sh@31 -- # read -r var val _
00:03:30.055 13:45:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170713884 kB' 'MemAvailable: 173943568 kB' 'Buffers: 3896 kB' 'Cached: 14587444 kB' 'SwapCached: 0 kB' 'Active: 11424928 kB' 'Inactive: 3694072 kB' 'Active(anon): 11006972 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531020 kB' 'Mapped: 181532 kB' 'Shmem: 10479312 kB' 'KReclaimable: 523744 kB' 'Slab: 1166756 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643012 kB' 'KernelStack: 20608 kB' 'PageTables: 8672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[... identical "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]" / "continue" pairs repeat for every field until the key matches ...]
00:03:30.317 13:45:21 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:30.317 13:45:21 -- setup/common.sh@33 -- # echo 0
00:03:30.317 13:45:21 -- setup/common.sh@33 -- # return 0
00:03:30.317 13:45:21 -- setup/hugepages.sh@100 -- # resv=0
00:03:30.317 13:45:21 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:30.317 nr_hugepages=1024
00:03:30.317 13:45:21 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:30.317 resv_hugepages=0
00:03:30.317 13:45:21 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:30.317 surplus_hugepages=0
00:03:30.317 13:45:21 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:30.317 anon_hugepages=0
00:03:30.317 13:45:21 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.317 13:45:21 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
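With anon, surp, and resv collected, the verification reduces to integer arithmetic over the counters just read: the expected pool size must be accounted for by the kernel's count plus surplus and reserved pages. A sketch mirroring the form of the checks at hugepages.sh@107-110, reusing the illustrative helper from earlier (the variable names are ours; the 1024 target is from this run):

expected=1024                                  # pool size the test expects on this host
total=$(get_meminfo_sketch HugePages_Total)    # 1024 in the trace
surp=$(get_meminfo_sketch HugePages_Surp)      # 0
resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0
(( expected == total + surp + resv )) || echo "hugepage pool mismatch" >&2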
00:03:30.317 13:45:21 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:30.317 13:45:21 -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:30.317 13:45:21 -- setup/common.sh@18 -- # local node=
00:03:30.317 13:45:21 -- setup/common.sh@19 -- # local var val
00:03:30.317 13:45:21 -- setup/common.sh@20 -- # local mem_f mem
00:03:30.317 13:45:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:30.317 13:45:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:30.317 13:45:21 -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:30.317 13:45:21 -- setup/common.sh@28 -- # mapfile -t mem
00:03:30.317 13:45:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:30.317 13:45:21 -- setup/common.sh@31 -- # IFS=': '
00:03:30.317 13:45:21 -- setup/common.sh@31 -- # read -r var val _
00:03:30.317 13:45:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170714252 kB' 'MemAvailable: 173943936 kB' 'Buffers: 3896 kB' 'Cached: 14587468 kB' 'SwapCached: 0 kB' 'Active: 11424608 kB' 'Inactive: 3694072 kB' 'Active(anon): 11006652 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694072 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530648 kB' 'Mapped: 181532 kB' 'Shmem: 10479336 kB' 'KReclaimable: 523744 kB' 'Slab: 1166756 kB' 'SReclaimable: 523744 kB' 'SUnreclaim: 643012 kB' 'KernelStack: 20592 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12547552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 111744 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3355604 kB' 'DirectMap2M: 27781120 kB' 'DirectMap1G: 170917888 kB'
[... identical "setup/common.sh@32 -- # [[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]" / "continue" pairs repeat until the key matches ...]
00:03:30.318 13:45:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:30.318 13:45:21 -- setup/common.sh@33 -- # echo 1024
00:03:30.318 13:45:21 -- setup/common.sh@33 -- # return 0
00:03:30.318 13:45:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:30.318 13:45:21 -- setup/hugepages.sh@112 -- # get_nodes
00:03:30.318 13:45:21 -- setup/hugepages.sh@27 -- # local node
00:03:30.318 13:45:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.318 13:45:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:30.318 13:45:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:30.318 13:45:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:30.318 13:45:21 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:30.318 13:45:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:30.318 13:45:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:30.318 13:45:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
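get_nodes then repeats the same bookkeeping per NUMA node: this host has two nodes, and each node<N> directory under sysfs carries its own meminfo. A sketch of that walk, using the same extglob pattern the trace shows and the illustrative helper from earlier (nodes_sys here holds per-node HugePages_Total, which is consistent with the 1024/0 split logged above but is our reading of it):

shopt -s extglob nullglob        # extglob enables the +([0-9]) pattern in the glob
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}                                         # "/sys/.../node0" -> "0"
    nodes_sys[$n]=$(get_meminfo_sketch HugePages_Total "$n")
done
for n in "${!nodes_sys[@]}"; do
    echo "node$n: ${nodes_sys[$n]} hugepages"                # node0: 1024, node1: 0 in this run
done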
00:03:30.318 13:45:21 -- setup/common.sh@31-32 -- # [xtrace condensed] the HugePages_Surp read loop then walks the node0 meminfo snapshot above field by field, MemTotal through SUnreclaim; none matches HugePages_Surp, so each iteration takes the continue branch.
00:03:30.319 13:45:21 -- setup/common.sh@31-32 -- # [xtrace condensed] AnonHugePages through HugePages_Free likewise take the continue branch.
00:03:30.319 13:45:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:30.319 13:45:21 -- setup/common.sh@33 -- # echo 0
00:03:30.319 13:45:21 -- setup/common.sh@33 -- # return 0
00:03:30.319 13:45:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:30.319 13:45:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:30.319 13:45:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:30.319 13:45:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:30.319 13:45:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:30.319 node0=1024 expecting 1024
00:03:30.319 13:45:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:30.319 
00:03:30.319 real	0m5.567s
00:03:30.319 user	0m2.257s
00:03:30.319 sys	0m3.407s
00:03:30.319 13:45:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:30.319 13:45:21 -- common/autotest_common.sh@10 -- # set +x
00:03:30.319 ************************************
00:03:30.319 END TEST no_shrink_alloc
00:03:30.319 ************************************
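The clear_hp teardown traced next zeroes every hugepage pool on every NUMA node through sysfs so the following test starts from a clean slate. A minimal sketch of the same idea (run as root; the function name is ours):

    # Reset all per-node hugepage pools to 0 pages.
    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes   # flag the cleared state, as the trace does
    }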
00:03:30.319 13:45:21 -- setup/hugepages.sh@217 -- # clear_hp
00:03:30.319 13:45:21 -- setup/hugepages.sh@37 -- # local node hp
00:03:30.319 13:45:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:30.319 13:45:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:30.319 13:45:21 -- setup/hugepages.sh@41 -- # echo 0
00:03:30.319 13:45:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:30.319 13:45:21 -- setup/hugepages.sh@41 -- # echo 0
00:03:30.319 13:45:21 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:03:30.319 13:45:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:30.319 13:45:21 -- setup/hugepages.sh@41 -- # echo 0
00:03:30.319 13:45:21 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:03:30.319 13:45:21 -- setup/hugepages.sh@41 -- # echo 0
00:03:30.319 13:45:21 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:30.319 13:45:21 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:30.319 
00:03:30.319 real	0m21.003s
00:03:30.319 user	0m8.007s
00:03:30.319 sys	0m12.213s
00:03:30.319 13:45:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:30.319 13:45:21 -- common/autotest_common.sh@10 -- # set +x
00:03:30.319 ************************************
00:03:30.319 END TEST hugepages
00:03:30.319 ************************************
00:03:30.319 13:45:21 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:30.319 13:45:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:30.319 13:45:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:30.319 13:45:21 -- common/autotest_common.sh@10 -- # set +x
00:03:30.319 ************************************
00:03:30.319 START TEST driver
00:03:30.319 ************************************
00:03:30.319 13:45:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:30.319 * Looking for test storage...
00:03:30.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:30.319 13:45:21 -- setup/driver.sh@68 -- # setup reset
00:03:30.319 13:45:21 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:30.319 13:45:21 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:34.505 13:45:24 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:34.505 13:45:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:34.505 13:45:24 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:34.505 13:45:24 -- common/autotest_common.sh@10 -- # set +x
00:03:34.505 ************************************
00:03:34.505 START TEST guess_driver
00:03:34.505 ************************************
00:03:34.505 13:45:24 -- common/autotest_common.sh@1104 -- # guess_driver
00:03:34.505 13:45:24 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:34.505 13:45:24 -- setup/driver.sh@47 -- # local fail=0
00:03:34.505 13:45:24 -- setup/driver.sh@49 -- # pick_driver
00:03:34.505 13:45:24 -- setup/driver.sh@36 -- # vfio
00:03:34.505 13:45:24 -- setup/driver.sh@21 -- # local iommu_groups
00:03:34.505 13:45:24 -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:34.505 13:45:24 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:34.505 13:45:24 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:34.505 13:45:24 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:34.505 13:45:24 -- setup/driver.sh@29 -- # (( 174 > 0 ))
00:03:34.505 13:45:24 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:34.505 13:45:24 -- setup/driver.sh@14 -- # mod vfio_pci
00:03:34.505 13:45:24 -- setup/driver.sh@12 -- # dep vfio_pci
00:03:34.505 13:45:24 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:34.505 13:45:24 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:34.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:34.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:34.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:34.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:34.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:34.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:34.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:34.505 13:45:24 -- setup/driver.sh@30 -- # return 0
00:03:34.505 13:45:24 -- setup/driver.sh@37 -- # echo vfio-pci
00:03:34.505 13:45:25 -- setup/driver.sh@49 -- # driver=vfio-pci
00:03:34.505 13:45:25 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:34.505 13:45:25 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:34.505 Looking for driver=vfio-pci
00:03:34.505 13:45:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:34.505 13:45:25 -- setup/driver.sh@45 -- # setup output config
00:03:34.505 13:45:25 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:34.505 13:45:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
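pick_driver, traced above, settles on vfio-pci because the kernel exposes populated IOMMU groups and modprobe can resolve vfio_pci's dependency chain. A sketch of that decision (the uio_pci_generic fallback is our assumption about the branch this run never takes):

    # Prefer vfio-pci when an IOMMU is usable and the module resolves;
    # otherwise try uio_pci_generic.
    pick_driver_sketch() {
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) &&
           modprobe --show-depends vfio_pci > /dev/null 2>&1; then
            echo vfio-pci
        elif modprobe --show-depends uio_pci_generic > /dev/null 2>&1; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }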
00:03:37.068 13:45:27 -- setup/driver.sh@57-61 -- # [xtrace condensed] the config read loop consumes sixteen '->' marker lines from setup.sh config, each passing [[ -> == \-\> ]] and confirming [[ vfio-pci == vfio-pci ]] before reading the next marker.
00:03:38.004 13:45:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:38.004 13:45:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:38.004 13:45:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:38.004 13:45:28 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:38.004 13:45:28 -- setup/driver.sh@65 -- # setup reset
00:03:38.004 13:45:28 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:38.004 13:45:28 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:42.186 
00:03:42.186 real	0m7.624s
00:03:42.186 user	0m2.137s
00:03:42.186 sys	0m3.896s
00:03:42.186 13:45:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:42.186 13:45:32 -- common/autotest_common.sh@10 -- # set +x
00:03:42.186 ************************************
00:03:42.186 END TEST guess_driver
00:03:42.186 ************************************
00:03:42.186 
00:03:42.186 real	0m11.411s
00:03:42.186 user	0m3.114s
00:03:42.186 sys	0m5.878s
00:03:42.186 13:45:32 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:42.186 13:45:32 -- common/autotest_common.sh@10 -- # set +x
00:03:42.186 ************************************
00:03:42.186 END TEST driver
00:03:42.186 ************************************
00:03:42.186 13:45:32 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:42.186 13:45:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:42.186 13:45:32 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:42.186 13:45:32 -- common/autotest_common.sh@10 -- # set +x
00:03:42.186 ************************************
00:03:42.186 START TEST devices
00:03:42.186 ************************************
00:03:42.186 13:45:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:42.186 * Looking for test storage...
00:03:42.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:42.186 13:45:32 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:42.186 13:45:32 -- setup/devices.sh@192 -- # setup reset
00:03:42.186 13:45:32 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:42.186 13:45:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:44.722 13:45:35 -- setup/devices.sh@194 -- # get_zoned_devs
00:03:44.722 13:45:35 -- common/autotest_common.sh@1654 -- # zoned_devs=()
00:03:44.722 13:45:35 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs
00:03:44.722 13:45:35 -- common/autotest_common.sh@1655 -- # local nvme bdf
00:03:44.722 13:45:35 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme*
00:03:44.722 13:45:35 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1
00:03:44.722 13:45:35 -- common/autotest_common.sh@1647 -- # local device=nvme0n1
00:03:44.722 13:45:35 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:44.722 13:45:35 -- common/autotest_common.sh@1650 -- # [[ none != none ]]
00:03:44.722 13:45:35 -- setup/devices.sh@196 -- # blocks=()
00:03:44.722 13:45:35 -- setup/devices.sh@196 -- # declare -a blocks
00:03:44.722 13:45:35 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:44.722 13:45:35 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:44.722 13:45:35 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:44.722 13:45:35 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:44.722 13:45:35 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:44.722 13:45:35 -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:44.722 13:45:35 -- setup/devices.sh@202 -- # pci=0000:5e:00.0
00:03:44.722 13:45:35 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]]
00:03:44.722 13:45:35 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:44.722 13:45:35 -- scripts/common.sh@380 -- # local block=nvme0n1 pt
00:03:44.722 13:45:35 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:44.722 No valid GPT data, bailing
00:03:44.722 13:45:35 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:44.722 13:45:35 -- scripts/common.sh@393 -- # pt=
00:03:44.722 13:45:35 -- scripts/common.sh@394 -- # return 1
00:03:44.722 13:45:35 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:44.722 13:45:35 -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:44.722 13:45:35 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:44.722 13:45:35 -- setup/common.sh@80 -- # echo 1000204886016
00:03:44.722 13:45:35 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:44.722 13:45:35 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:44.722 13:45:35 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0
00:03:44.722 13:45:35 -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:44.722 13:45:35 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:03:44.722 13:45:35 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:44.722 13:45:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:44.722 13:45:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:44.722 13:45:35 -- common/autotest_common.sh@10 -- # set +x
00:03:44.722 ************************************
00:03:44.722 START TEST nvme_mount
00:03:44.722 ************************************
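The scan just traced picked nvme0n1 as the test disk: not zoned, no existing partition table (spdk-gpt.py bailed and blkid returned no PTTYPE), and 1000204886016 bytes, well above the 3 GiB minimum. A standalone sketch of that selection logic (variable names are ours):

    # Pick the first NVMe namespace that is not zoned, carries no partition
    # table, and is at least min_disk_size bytes.
    min_disk_size=3221225472   # 3 GiB
    for block in /sys/block/nvme*n*; do
        dev=${block##*/}
        [[ $dev == *c* ]] && continue   # skip multipath controller nodes
        if [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]]; then
            continue                     # skip zoned namespaces
        fi
        [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
        size=$(( $(<"$block/size") * 512 ))   # sectors -> bytes
        (( size >= min_disk_size )) && { echo "test disk: $dev"; break; }
    done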
00:03:44.722 13:45:35 -- common/autotest_common.sh@1104 -- # nvme_mount
00:03:44.722 13:45:35 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:44.722 13:45:35 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:44.722 13:45:35 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:44.722 13:45:35 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:44.722 13:45:35 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:44.722 13:45:35 -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:44.722 13:45:35 -- setup/common.sh@40 -- # local part_no=1
00:03:44.722 13:45:35 -- setup/common.sh@41 -- # local size=1073741824
00:03:44.722 13:45:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:44.722 13:45:35 -- setup/common.sh@44 -- # parts=()
00:03:44.722 13:45:35 -- setup/common.sh@44 -- # local parts
00:03:44.722 13:45:35 -- setup/common.sh@46 -- # (( part = 1 ))
00:03:44.722 13:45:35 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:44.722 13:45:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:44.722 13:45:35 -- setup/common.sh@46 -- # (( part++ ))
00:03:44.722 13:45:35 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:44.722 13:45:35 -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:44.722 13:45:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:44.722 13:45:35 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:45.661 Creating new GPT entries in memory.
00:03:45.661 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:45.661 other utilities.
00:03:45.661 13:45:36 -- setup/common.sh@57 -- # (( part = 1 ))
00:03:45.661 13:45:36 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:45.661 13:45:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:45.661 13:45:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:45.661 13:45:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:46.599 Creating new GPT entries in memory.
00:03:46.599 The operation has completed successfully.
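partition_drive converts the 1073741824-byte request into 2097152 512-byte sectors, wipes the GPT, and creates one partition spanning sectors 2048-2099199. The same arithmetic as a minimal sketch:

    disk=/dev/nvme0n1
    size=$(( 1073741824 / 512 ))            # 1 GiB in 512-byte sectors = 2097152
    part_start=2048
    part_end=$(( part_start + size - 1 ))   # 2099199, as in the sgdisk call above
    sgdisk "$disk" --zap-all
    flock "$disk" sgdisk "$disk" --new=1:$part_start:$part_end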
00:03:46.599 13:45:37 -- setup/common.sh@57 -- # (( part++ ))
00:03:46.599 13:45:37 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:46.599 13:45:37 -- setup/common.sh@62 -- # wait 3060834
00:03:46.599 13:45:37 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:46.599 13:45:37 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:46.599 13:45:37 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:46.599 13:45:37 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:46.599 13:45:37 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:46.599 13:45:37 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
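The mkfs helper just traced is three operations on the fresh partition. As a sketch (paths as in the trace; run as root):

    dev=/dev/nvme0n1p1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mkdir -p "$mnt"
    mkfs.ext4 -qF "$dev"   # -q quiet, -F force (skip the interactive prompt)
    mount "$dev" "$mnt"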
00:03:46.599 13:45:37 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:46.599 13:45:37 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:46.599 13:45:37 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:46.599 13:45:37 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:46.599 13:45:37 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:46.599 13:45:37 -- setup/devices.sh@53 -- # local found=0
00:03:46.599 13:45:37 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:46.599 13:45:37 -- setup/devices.sh@56 -- # :
00:03:46.599 13:45:37 -- setup/devices.sh@59 -- # local pci status
00:03:46.599 13:45:37 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:46.599 13:45:37 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:46.599 13:45:37 -- setup/devices.sh@47 -- # setup output config
00:03:46.599 13:45:37 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.599 13:45:37 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:49.132 13:45:40 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:49.132 13:45:40 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:49.132 13:45:40 -- setup/devices.sh@63 -- # found=1
00:03:49.391 13:45:40 -- setup/devices.sh@60-62 -- # [xtrace condensed] the remaining config lines (0000:00:04.0-7, 0000:80:04.0-7) fail the allow-list comparison against 0000:5e:00.0.
00:03:49.391 13:45:40 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:49.391 13:45:40 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:49.391 13:45:40 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.391 13:45:40 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:49.391 13:45:40 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:49.391 13:45:40 -- setup/devices.sh@110 -- # cleanup_nvme
00:03:49.391 13:45:40 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.391 13:45:40 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.391 13:45:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:49.391 13:45:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:49.391 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:49.391 13:45:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:49.391 13:45:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:49.650 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:49.650 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:49.650 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:49.650 /dev/nvme0n1: calling ioctl to re-read partition table: Success
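cleanup_nvme, traced above, is the inverse of the mkfs step: unmount if mounted, then wipe filesystem and partition-table signatures from both the partition and the whole disk. As a sketch:

    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    mountpoint -q "$mnt" && umount "$mnt"
    [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
    [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1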
00:03:49.650 13:45:40 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:49.650 13:45:40 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:49.650 13:45:40 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.650 13:45:40 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:49.650 13:45:40 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:49.650 13:45:40 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.650 13:45:40 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:49.650 13:45:40 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:49.650 13:45:40 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:49.650 13:45:40 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:49.650 13:45:40 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:49.650 13:45:40 -- setup/devices.sh@53 -- # local found=0
00:03:49.650 13:45:40 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:49.650 13:45:40 -- setup/devices.sh@56 -- # :
00:03:49.650 13:45:40 -- setup/devices.sh@59 -- # local pci status
00:03:49.650 13:45:40 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:49.650 13:45:40 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:49.650 13:45:40 -- setup/devices.sh@47 -- # setup output config
00:03:49.650 13:45:40 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:49.650 13:45:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
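The verify pass whose output follows reruns scripts/setup.sh config with PCI_ALLOWED narrowed to the test NVMe and checks that the status column reports the device as active (and therefore not rebound). A sketch of that loop, as our reconstruction of the pattern rather than the script verbatim (the expected substring mirrors the mounts argument verify was called with):

    allowed=0000:5e:00.0
    expect=nvme0n1:nvme0n1   # mounts signature passed to verify
    found=0
    while read -r pci _ _ status; do
        if [[ $pci == "$allowed" && $status == *"Active devices: "*"$expect"* ]]; then
            found=1
        fi
    done < <(PCI_ALLOWED=$allowed /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config)
    (( found == 1 ))   # the test fails unless the device stayed claimed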
00:03:52.936 13:45:43 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:52.936 13:45:43 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:52.936 13:45:43 -- setup/devices.sh@63 -- # found=1
00:03:52.936 13:45:43 -- setup/devices.sh@60-62 -- # [xtrace condensed] the remaining config lines (0000:00:04.0-7, 0000:80:04.0-7) again fail the allow-list comparison.
00:03:52.936 13:45:43 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:52.936 13:45:43 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:52.936 13:45:43 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:52.936 13:45:43 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:52.936 13:45:43 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:52.936 13:45:43 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:52.936 13:45:43 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' ''
00:03:52.936 13:45:43 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:52.936 13:45:43 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:52.936 13:45:43 -- setup/devices.sh@50 -- # local mount_point=
00:03:52.936 13:45:43 -- setup/devices.sh@51 -- # local test_file=
00:03:52.936 13:45:43 -- setup/devices.sh@53 -- # local found=0
00:03:52.936 13:45:43 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:52.936 13:45:43 -- setup/devices.sh@59 -- # local pci status
00:03:52.936 13:45:43 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:52.936 13:45:43 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:52.936 13:45:43 -- setup/devices.sh@47 -- # setup output config
00:03:52.936 13:45:43 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:52.936 13:45:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:55.472 13:45:46 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:03:55.472 13:45:46 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:55.472 13:45:46 -- setup/devices.sh@63 -- # found=1
00:03:55.472 13:45:46 -- setup/devices.sh@60-62 -- # [xtrace condensed] the remaining config lines (0000:00:04.0-7, 0000:80:04.0-7) again fail the allow-list comparison.
00:03:55.472 13:45:46 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:55.472 13:45:46 -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:55.472 13:45:46 -- setup/devices.sh@68 -- # return 0
00:03:55.472 13:45:46 -- setup/devices.sh@128 -- # cleanup_nvme
00:03:55.472 13:45:46 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:55.472 13:45:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:55.472 13:45:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:55.472 13:45:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:55.472 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:55.472 
00:03:55.472 real	0m10.757s
00:03:55.472 user	0m3.227s
00:03:55.472 sys	0m5.377s
00:03:55.472 13:45:46 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:55.472 13:45:46 -- common/autotest_common.sh@10 -- # set +x
00:03:55.472 ************************************
00:03:55.472 END TEST nvme_mount
00:03:55.472 ************************************
00:03:55.472 13:45:46 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:55.472 13:45:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:55.472 13:45:46 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:55.472 13:45:46 -- common/autotest_common.sh@10 -- # set +x
00:03:55.472 ************************************
00:03:55.472 START TEST dm_mount
00:03:55.472 ************************************
00:03:55.472 13:45:46 -- common/autotest_common.sh@1104 -- # dm_mount
00:03:55.472 13:45:46 -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:55.472 13:45:46 -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:55.472 13:45:46 -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:55.472 13:45:46 -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:55.472 13:45:46 -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:55.472 13:45:46 -- setup/common.sh@40 -- # local part_no=2
00:03:55.472 13:45:46 -- setup/common.sh@41 -- # local size=1073741824
00:03:55.472 13:45:46 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:55.472 13:45:46 -- setup/common.sh@44 -- # parts=()
00:03:55.472 13:45:46 -- setup/common.sh@44 -- # local parts
00:03:55.472 13:45:46 -- setup/common.sh@46 -- # (( part = 1 ))
00:03:55.472 13:45:46 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:55.472 13:45:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:55.472 13:45:46 -- setup/common.sh@46 -- # (( part++ ))
00:03:55.472 13:45:46 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:55.472 13:45:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:55.472 13:45:46 -- setup/common.sh@46 -- # (( part++ ))
00:03:55.472 13:45:46 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:55.472 13:45:46 -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:55.472 13:45:46 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:55.472 13:45:46 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:56.408 Creating new GPT entries in memory.
00:03:56.408 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:56.408 other utilities.
00:03:56.408 13:45:47 -- setup/common.sh@57 -- # (( part = 1 ))
00:03:56.408 13:45:47 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:56.408 13:45:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:56.408 13:45:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:56.408 13:45:47 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:57.346 Creating new GPT entries in memory.
00:03:57.346 The operation has completed successfully.
00:03:57.346 13:45:48 -- setup/common.sh@57 -- # (( part++ ))
00:03:57.346 13:45:48 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:57.346 13:45:48 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:57.346 13:45:48 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:57.346 13:45:48 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:58.723 The operation has completed successfully.
00:03:58.723 13:45:49 -- setup/common.sh@57 -- # (( part++ ))
00:03:58.723 13:45:49 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:58.723 13:45:49 -- setup/common.sh@62 -- # wait 3065101
00:03:58.723 13:45:49 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:58.723 13:45:49 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:58.723 13:45:49 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:58.723 13:45:49 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:58.723 13:45:49 -- setup/devices.sh@160 -- # for t in {1..5}
00:03:58.723 13:45:49 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:58.723 13:45:49 -- setup/devices.sh@161 -- # break
00:03:58.723 13:45:49 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:58.723 13:45:49 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:58.723 13:45:49 -- setup/devices.sh@165 -- # dm=/dev/dm-2
00:03:58.723 13:45:49 -- setup/devices.sh@166 -- # dm=dm-2
00:03:58.723 13:45:49 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]]
00:03:58.723 13:45:49 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]]
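dmsetup create above builds the nvme_dm_test device from the two fresh partitions, and the holders links confirm both now sit under dm-2. The table itself is piped to dmsetup by the harness and is not shown in the trace; a plausible reconstruction that joins the partitions linearly (our assumption):

    p1=/dev/nvme0n1p1; p2=/dev/nvme0n1p2
    s1=$(blockdev --getsz "$p1")   # partition sizes in 512-byte sectors
    s2=$(blockdev --getsz "$p2")
    # Table rows: <start> <length> linear <backing device> <offset>
    printf '0 %s linear %s 0\n%s %s linear %s 0\n' \
        "$s1" "$p1" "$s1" "$s2" "$p2" | dmsetup create nvme_dm_test
    readlink -f /dev/mapper/nvme_dm_test    # e.g. /dev/dm-2
    ls /sys/class/block/nvme0n1p1/holders   # shows dm-2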
00:03:58.723 13:45:49 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:58.723 13:45:49 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:58.723 13:45:49 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:58.723 13:45:49 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:58.723 13:45:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:58.723 13:45:49 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:58.723 13:45:49 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:58.723 13:45:49 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:03:58.723 13:45:49 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:58.723 13:45:49 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:58.723 13:45:49 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:58.723 13:45:49 -- setup/devices.sh@53 -- # local found=0
00:03:58.723 13:45:49 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:58.723 13:45:49 -- setup/devices.sh@56 -- # :
00:03:58.723 13:45:49 -- setup/devices.sh@59 -- # local pci status
00:03:58.723 13:45:49 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:58.723 13:45:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:03:58.723 13:45:49 -- setup/devices.sh@47 -- # setup output config
00:03:58.723 13:45:49 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:58.723 13:45:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:01.291 13:45:52 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:01.291 13:45:52 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:04:01.291 13:45:52 -- setup/devices.sh@63 -- # found=1
00:04:01.291 13:45:52 -- setup/devices.sh@60-62 -- # [xtrace condensed] the remaining config lines (0000:00:04.0-7, 0000:80:04.0-7) fail the allow-list comparison.
00:04:01.291 13:45:52 -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:01.291 13:45:52 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:01.291 13:45:52 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:01.291 13:45:52 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:01.291 13:45:52 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:01.291 13:45:52 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:01.291 13:45:52 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' ''
00:04:01.291 13:45:52 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0
00:04:01.291 13:45:52 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2
00:04:01.291 13:45:52 -- setup/devices.sh@50 -- # local mount_point=
00:04:01.291 13:45:52 -- setup/devices.sh@51 -- # local test_file=
00:04:01.291 13:45:52 -- setup/devices.sh@53 -- # local found=0
00:04:01.291 13:45:52 -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:01.291 13:45:52 -- setup/devices.sh@59 -- # local pci status
00:04:01.291 13:45:52 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:01.291 13:45:52 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0
00:04:01.291 13:45:52 -- setup/devices.sh@47 -- # setup output config
00:04:01.291 13:45:52 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.291 13:45:52 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]]
00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]]
00:04:03.824 13:45:54 -- setup/devices.sh@63 -- # found=1
00:04:03.824 13:45:54 -- setup/devices.sh@60-62 -- # [xtrace condensed] the remaining config lines (0000:00:04.0-7, 0000:80:04.0-7) fail the allow-list comparison.
00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:03.824 13:45:54 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:03.824 13:45:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:04.082 13:45:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:04.082 13:45:54 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:04.082 13:45:54 -- setup/devices.sh@68 -- # return 0 00:04:04.082 13:45:54 -- setup/devices.sh@187 -- # cleanup_dm 00:04:04.082 13:45:54 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.082 13:45:54 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:04.082 13:45:54 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:04.082 13:45:54 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:04.082 13:45:54 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:04.082 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:04.082 13:45:55 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:04.083 13:45:55 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:04.083 00:04:04.083 real 0m8.742s 00:04:04.083 user 0m2.128s 00:04:04.083 sys 0m3.652s 00:04:04.083 13:45:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.083 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:04:04.083 ************************************ 00:04:04.083 END TEST dm_mount 00:04:04.083 ************************************ 00:04:04.083 13:45:55 -- setup/devices.sh@1 -- # cleanup 00:04:04.083 13:45:55 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:04.083 13:45:55 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:04.083 13:45:55 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:04.083 13:45:55 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:04.083 13:45:55 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:04.083 13:45:55 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:04.340 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:04.340 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:04.340 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:04.340 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:04.340 13:45:55 -- setup/devices.sh@12 -- # cleanup_dm 00:04:04.340 13:45:55 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:04.340 13:45:55 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:04.340 13:45:55 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:04.340 13:45:55 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:04.340 13:45:55 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:04.340 13:45:55 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:04.340 00:04:04.340 real 0m22.649s 00:04:04.340 user 0m6.306s 00:04:04.340 sys 0m10.915s 00:04:04.340 13:45:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.340 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:04:04.340 ************************************ 00:04:04.340 END TEST devices 00:04:04.340 ************************************ 00:04:04.599 00:04:04.599 real 1m14.369s 00:04:04.599 user 0m23.737s 00:04:04.599 sys 0m40.681s 00:04:04.599 13:45:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.599 13:45:55 -- common/autotest_common.sh@10 -- # set +x 00:04:04.599 ************************************ 00:04:04.599 END TEST setup.sh 00:04:04.599 ************************************ 00:04:04.599 13:45:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:07.267 Hugepages 00:04:07.267 node hugesize free / total 00:04:07.267 node0 1048576kB 0 / 0 00:04:07.267 node0 2048kB 2048 / 2048 00:04:07.267 node1 1048576kB 0 / 0 00:04:07.267 node1 2048kB 0 / 0 00:04:07.267 00:04:07.267 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:07.267 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:07.267 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:07.267 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:07.267 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:07.267 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:07.267 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:07.267 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:07.267 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:07.267 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:07.268 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:07.268 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:07.268 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:07.268 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:07.268 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:07.268 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:07.268 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:07.268 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:07.268 13:45:58 -- spdk/autotest.sh@141 -- # uname -s 00:04:07.268 13:45:58 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:07.268 13:45:58 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:07.268 13:45:58 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:09.802 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:00:04.2 (8086 2021): 
ioatdma -> vfio-pci 00:04:09.802 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:09.802 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:10.738 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:10.738 13:46:01 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:11.674 13:46:02 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:11.674 13:46:02 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:11.674 13:46:02 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:11.674 13:46:02 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:11.674 13:46:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:11.674 13:46:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:11.674 13:46:02 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.674 13:46:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:11.674 13:46:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:11.674 13:46:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:11.674 13:46:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:11.674 13:46:02 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:14.204 Waiting for block devices as requested 00:04:14.463 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:14.463 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:14.463 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:14.723 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:14.723 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:14.723 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:14.723 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:14.981 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:14.981 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:14.981 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:14.981 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:15.240 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:15.240 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:15.240 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:15.498 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:15.498 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:15.498 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:15.498 13:46:06 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:15.498 13:46:06 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:15.498 13:46:06 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:15.498 13:46:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:15.498 13:46:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:15.498 13:46:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:15.498 13:46:06 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:15.498 13:46:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:15.498 13:46:06 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:15.498 13:46:06 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:15.498 13:46:06 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:15.498 13:46:06 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:15.498 13:46:06 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:15.498 13:46:06 -- common/autotest_common.sh@1530 -- # oacs=' 0xe' 00:04:15.498 13:46:06 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:15.498 13:46:06 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:15.498 13:46:06 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:15.756 13:46:06 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:15.756 13:46:06 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:15.756 13:46:06 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:15.756 13:46:06 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:15.756 13:46:06 -- common/autotest_common.sh@1542 -- # continue 00:04:15.756 13:46:06 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:15.756 13:46:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:15.756 13:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:15.756 13:46:06 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:15.756 13:46:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:15.756 13:46:06 -- common/autotest_common.sh@10 -- # set +x 00:04:15.756 13:46:06 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:18.285 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.285 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.285 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:18.286 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.311 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.311 13:46:10 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:19.311 13:46:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:19.312 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:04:19.312 13:46:10 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:19.312 13:46:10 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:19.312 13:46:10 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:19.312 13:46:10 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:19.312 13:46:10 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:19.312 13:46:10 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:19.312 13:46:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:19.312 
13:46:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:19.312 13:46:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.312 13:46:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:19.312 13:46:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:19.312 13:46:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:19.312 13:46:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:19.312 13:46:10 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:19.312 13:46:10 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:19.312 13:46:10 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:04:19.312 13:46:10 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:19.312 13:46:10 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:04:19.312 13:46:10 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:5e:00.0 00:04:19.312 13:46:10 -- common/autotest_common.sh@1577 -- # [[ -z 0000:5e:00.0 ]] 00:04:19.312 13:46:10 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=3074017 00:04:19.312 13:46:10 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:19.312 13:46:10 -- common/autotest_common.sh@1583 -- # waitforlisten 3074017 00:04:19.312 13:46:10 -- common/autotest_common.sh@819 -- # '[' -z 3074017 ']' 00:04:19.312 13:46:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.312 13:46:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:19.312 13:46:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.312 13:46:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:19.312 13:46:10 -- common/autotest_common.sh@10 -- # set +x 00:04:19.312 [2024-07-23 13:46:10.291752] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
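Before spdk_tgt comes up, the helpers traced above pick the controller to operate on: get_nvme_bdfs asks gen_nvme.sh for every NVMe traddr, and get_nvme_bdfs_by_id keeps only the BDFs whose PCI device ID matches (0x0a54 in this run). The same two steps as a standalone sketch, using only commands that appear in the trace:

    # Enumerate NVMe BDFs the way the trace does, then filter by device ID.
    bdfs=($(scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
    matched=()
    for bdf in "${bdfs[@]}"; do
        # Every PCI function exposes its device ID in sysfs.
        [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] && matched+=("$bdf")
    done
    printf '%s\n' "${matched[@]}"   # -> 0000:5e:00.0 on this machine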
00:04:19.312 [2024-07-23 13:46:10.291798] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3074017 ] 00:04:19.570 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.570 [2024-07-23 13:46:10.343593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.570 [2024-07-23 13:46:10.420852] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:19.570 [2024-07-23 13:46:10.420963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.137 13:46:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:20.137 13:46:11 -- common/autotest_common.sh@852 -- # return 0 00:04:20.137 13:46:11 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:04:20.137 13:46:11 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:04:20.137 13:46:11 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:23.421 nvme0n1 00:04:23.421 13:46:14 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:23.421 [2024-07-23 13:46:14.243250] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:23.421 request: 00:04:23.421 { 00:04:23.421 "nvme_ctrlr_name": "nvme0", 00:04:23.421 "password": "test", 00:04:23.421 "method": "bdev_nvme_opal_revert", 00:04:23.421 "req_id": 1 00:04:23.421 } 00:04:23.421 Got JSON-RPC error response 00:04:23.421 response: 00:04:23.421 { 00:04:23.421 "code": -32602, 00:04:23.421 "message": "Invalid parameters" 00:04:23.421 } 00:04:23.421 13:46:14 -- common/autotest_common.sh@1589 -- # true 00:04:23.421 13:46:14 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:04:23.421 13:46:14 -- common/autotest_common.sh@1593 -- # killprocess 3074017 00:04:23.421 13:46:14 -- common/autotest_common.sh@926 -- # '[' -z 3074017 ']' 00:04:23.421 13:46:14 -- common/autotest_common.sh@930 -- # kill -0 3074017 00:04:23.421 13:46:14 -- common/autotest_common.sh@931 -- # uname 00:04:23.421 13:46:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:23.421 13:46:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3074017 00:04:23.421 13:46:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:23.421 13:46:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:23.421 13:46:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3074017' 00:04:23.421 killing process with pid 3074017 00:04:23.421 13:46:14 -- common/autotest_common.sh@945 -- # kill 3074017 00:04:23.421 13:46:14 -- common/autotest_common.sh@950 -- # wait 3074017 00:04:25.320 13:46:15 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:25.320 13:46:15 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:25.320 13:46:15 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:25.320 13:46:15 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:04:25.320 13:46:15 -- spdk/autotest.sh@173 -- # timing_enter lib 00:04:25.320 13:46:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:25.320 13:46:15 -- common/autotest_common.sh@10 -- # set +x 00:04:25.320 13:46:15 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:25.320 13:46:15 
-- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:25.320 13:46:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:25.320 13:46:15 -- common/autotest_common.sh@10 -- # set +x
00:04:25.320 ************************************
00:04:25.320 START TEST env
00:04:25.320 ************************************
00:04:25.320 13:46:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh
00:04:25.320 * Looking for test storage...
00:04:25.320 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env
00:04:25.320 13:46:16 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:25.320 13:46:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:25.320 13:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:25.320 13:46:16 -- common/autotest_common.sh@10 -- # set +x
00:04:25.320 ************************************
00:04:25.320 START TEST env_memory
00:04:25.320 ************************************
00:04:25.320 13:46:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut
00:04:25.320
00:04:25.320
00:04:25.320 CUnit - A unit testing framework for C - Version 2.1-3
00:04:25.320 http://cunit.sourceforge.net/
00:04:25.320
00:04:25.320
00:04:25.320 Suite: memory
00:04:25.320 Test: alloc and free memory map ...[2024-07-23 13:46:16.058001] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed
00:04:25.320 passed
00:04:25.320 Test: mem map translation ...[2024-07-23 13:46:16.076242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234
00:04:25.320 [2024-07-23 13:46:16.076255] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152
00:04:25.320 [2024-07-23 13:46:16.076289] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656
00:04:25.320 [2024-07-23 13:46:16.076295] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map
00:04:25.320 passed
00:04:25.320 Test: mem map registration ...[2024-07-23 13:46:16.113188] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234
00:04:25.320 [2024-07-23 13:46:16.113203] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152
00:04:25.320 passed
00:04:25.320 Test: mem map adjacent registrations ...passed
00:04:25.320
00:04:25.320 Run Summary: Type     Total    Ran  Passed  Failed  Inactive
00:04:25.320              suites       1      1     n/a       0         0
00:04:25.320              tests        4      4       4       0         0
00:04:25.320              asserts    152    152     152       0       n/a
00:04:25.320
00:04:25.320 Elapsed time = 0.138 seconds
00:04:25.320
00:04:25.320 real 0m0.150s
00:04:25.320 user 0m0.139s
00:04:25.320 sys 0m0.011s
00:04:25.320 13:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:25.320 13:46:16 -- common/autotest_common.sh@10 -- # set +x
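The real/user/sys block just above and the asterisk banners framing every test come from the run_test wrapper in autotest_common.sh; roughly, as a simplified sketch (the banner text matches the log, the helper body is an assumption):

    # Simplified shape of run_test: banner, time the command, banner.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                    # emits the real/user/sys lines seen here
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
    run_test env_memory test/env/memory/memory_ut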
00:04:25.320 ************************************ 00:04:25.320 END TEST env_memory 00:04:25.320 ************************************ 00:04:25.320 13:46:16 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:25.320 13:46:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:25.320 13:46:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:25.320 13:46:16 -- common/autotest_common.sh@10 -- # set +x 00:04:25.320 ************************************ 00:04:25.320 START TEST env_vtophys 00:04:25.320 ************************************ 00:04:25.320 13:46:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:25.320 EAL: lib.eal log level changed from notice to debug 00:04:25.320 EAL: Detected lcore 0 as core 0 on socket 0 00:04:25.320 EAL: Detected lcore 1 as core 1 on socket 0 00:04:25.320 EAL: Detected lcore 2 as core 2 on socket 0 00:04:25.320 EAL: Detected lcore 3 as core 3 on socket 0 00:04:25.320 EAL: Detected lcore 4 as core 4 on socket 0 00:04:25.320 EAL: Detected lcore 5 as core 5 on socket 0 00:04:25.320 EAL: Detected lcore 6 as core 6 on socket 0 00:04:25.320 EAL: Detected lcore 7 as core 8 on socket 0 00:04:25.320 EAL: Detected lcore 8 as core 9 on socket 0 00:04:25.320 EAL: Detected lcore 9 as core 10 on socket 0 00:04:25.320 EAL: Detected lcore 10 as core 11 on socket 0 00:04:25.320 EAL: Detected lcore 11 as core 12 on socket 0 00:04:25.320 EAL: Detected lcore 12 as core 13 on socket 0 00:04:25.320 EAL: Detected lcore 13 as core 16 on socket 0 00:04:25.320 EAL: Detected lcore 14 as core 17 on socket 0 00:04:25.320 EAL: Detected lcore 15 as core 18 on socket 0 00:04:25.320 EAL: Detected lcore 16 as core 19 on socket 0 00:04:25.320 EAL: Detected lcore 17 as core 20 on socket 0 00:04:25.320 EAL: Detected lcore 18 as core 21 on socket 0 00:04:25.320 EAL: Detected lcore 19 as core 25 on socket 0 00:04:25.320 EAL: Detected lcore 20 as core 26 on socket 0 00:04:25.320 EAL: Detected lcore 21 as core 27 on socket 0 00:04:25.320 EAL: Detected lcore 22 as core 28 on socket 0 00:04:25.320 EAL: Detected lcore 23 as core 29 on socket 0 00:04:25.320 EAL: Detected lcore 24 as core 0 on socket 1 00:04:25.320 EAL: Detected lcore 25 as core 1 on socket 1 00:04:25.320 EAL: Detected lcore 26 as core 2 on socket 1 00:04:25.320 EAL: Detected lcore 27 as core 3 on socket 1 00:04:25.320 EAL: Detected lcore 28 as core 4 on socket 1 00:04:25.320 EAL: Detected lcore 29 as core 5 on socket 1 00:04:25.320 EAL: Detected lcore 30 as core 6 on socket 1 00:04:25.320 EAL: Detected lcore 31 as core 9 on socket 1 00:04:25.320 EAL: Detected lcore 32 as core 10 on socket 1 00:04:25.320 EAL: Detected lcore 33 as core 11 on socket 1 00:04:25.320 EAL: Detected lcore 34 as core 12 on socket 1 00:04:25.320 EAL: Detected lcore 35 as core 13 on socket 1 00:04:25.320 EAL: Detected lcore 36 as core 16 on socket 1 00:04:25.320 EAL: Detected lcore 37 as core 17 on socket 1 00:04:25.320 EAL: Detected lcore 38 as core 18 on socket 1 00:04:25.320 EAL: Detected lcore 39 as core 19 on socket 1 00:04:25.320 EAL: Detected lcore 40 as core 20 on socket 1 00:04:25.320 EAL: Detected lcore 41 as core 21 on socket 1 00:04:25.320 EAL: Detected lcore 42 as core 24 on socket 1 00:04:25.320 EAL: Detected lcore 43 as core 25 on socket 1 00:04:25.320 EAL: Detected lcore 44 as core 26 on socket 1 00:04:25.320 EAL: Detected lcore 45 as core 27 on socket 1 00:04:25.320 EAL: Detected lcore 46 as 
core 28 on socket 1 00:04:25.320 EAL: Detected lcore 47 as core 29 on socket 1 00:04:25.320 EAL: Detected lcore 48 as core 0 on socket 0 00:04:25.320 EAL: Detected lcore 49 as core 1 on socket 0 00:04:25.320 EAL: Detected lcore 50 as core 2 on socket 0 00:04:25.320 EAL: Detected lcore 51 as core 3 on socket 0 00:04:25.320 EAL: Detected lcore 52 as core 4 on socket 0 00:04:25.320 EAL: Detected lcore 53 as core 5 on socket 0 00:04:25.320 EAL: Detected lcore 54 as core 6 on socket 0 00:04:25.320 EAL: Detected lcore 55 as core 8 on socket 0 00:04:25.320 EAL: Detected lcore 56 as core 9 on socket 0 00:04:25.320 EAL: Detected lcore 57 as core 10 on socket 0 00:04:25.320 EAL: Detected lcore 58 as core 11 on socket 0 00:04:25.320 EAL: Detected lcore 59 as core 12 on socket 0 00:04:25.320 EAL: Detected lcore 60 as core 13 on socket 0 00:04:25.320 EAL: Detected lcore 61 as core 16 on socket 0 00:04:25.320 EAL: Detected lcore 62 as core 17 on socket 0 00:04:25.320 EAL: Detected lcore 63 as core 18 on socket 0 00:04:25.320 EAL: Detected lcore 64 as core 19 on socket 0 00:04:25.320 EAL: Detected lcore 65 as core 20 on socket 0 00:04:25.320 EAL: Detected lcore 66 as core 21 on socket 0 00:04:25.320 EAL: Detected lcore 67 as core 25 on socket 0 00:04:25.320 EAL: Detected lcore 68 as core 26 on socket 0 00:04:25.320 EAL: Detected lcore 69 as core 27 on socket 0 00:04:25.321 EAL: Detected lcore 70 as core 28 on socket 0 00:04:25.321 EAL: Detected lcore 71 as core 29 on socket 0 00:04:25.321 EAL: Detected lcore 72 as core 0 on socket 1 00:04:25.321 EAL: Detected lcore 73 as core 1 on socket 1 00:04:25.321 EAL: Detected lcore 74 as core 2 on socket 1 00:04:25.321 EAL: Detected lcore 75 as core 3 on socket 1 00:04:25.321 EAL: Detected lcore 76 as core 4 on socket 1 00:04:25.321 EAL: Detected lcore 77 as core 5 on socket 1 00:04:25.321 EAL: Detected lcore 78 as core 6 on socket 1 00:04:25.321 EAL: Detected lcore 79 as core 9 on socket 1 00:04:25.321 EAL: Detected lcore 80 as core 10 on socket 1 00:04:25.321 EAL: Detected lcore 81 as core 11 on socket 1 00:04:25.321 EAL: Detected lcore 82 as core 12 on socket 1 00:04:25.321 EAL: Detected lcore 83 as core 13 on socket 1 00:04:25.321 EAL: Detected lcore 84 as core 16 on socket 1 00:04:25.321 EAL: Detected lcore 85 as core 17 on socket 1 00:04:25.321 EAL: Detected lcore 86 as core 18 on socket 1 00:04:25.321 EAL: Detected lcore 87 as core 19 on socket 1 00:04:25.321 EAL: Detected lcore 88 as core 20 on socket 1 00:04:25.321 EAL: Detected lcore 89 as core 21 on socket 1 00:04:25.321 EAL: Detected lcore 90 as core 24 on socket 1 00:04:25.321 EAL: Detected lcore 91 as core 25 on socket 1 00:04:25.321 EAL: Detected lcore 92 as core 26 on socket 1 00:04:25.321 EAL: Detected lcore 93 as core 27 on socket 1 00:04:25.321 EAL: Detected lcore 94 as core 28 on socket 1 00:04:25.321 EAL: Detected lcore 95 as core 29 on socket 1 00:04:25.321 EAL: Maximum logical cores by configuration: 128 00:04:25.321 EAL: Detected CPU lcores: 96 00:04:25.321 EAL: Detected NUMA nodes: 2 00:04:25.321 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:25.321 EAL: Detected shared linkage of DPDK 00:04:25.321 EAL: No shared files mode enabled, IPC will be disabled 00:04:25.321 EAL: Bus pci wants IOVA as 'DC' 00:04:25.321 EAL: Buses did not request a specific IOVA mode. 00:04:25.321 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:25.321 EAL: Selected IOVA mode 'VA' 00:04:25.321 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.321 EAL: Probing VFIO support... 
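EAL derives the lcore map, the two NUMA nodes, and the IOVA mode above from standard kernel interfaces, so they can be cross-checked by hand while the probe output continues below (illustrative commands, not part of the test):

    # 96 lcores across 2 NUMA nodes, as EAL reports above.
    lscpu | grep -E '^(CPU\(s\)|NUMA node)'
    # A populated iommu_groups tree is what lets EAL select IOVA mode 'VA'
    # and initialize VFIO support.
    ls /sys/kernel/iommu_groups | wc -l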
00:04:25.321 EAL: IOMMU type 1 (Type 1) is supported 00:04:25.321 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:25.321 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:25.321 EAL: VFIO support initialized 00:04:25.321 EAL: Ask a virtual area of 0x2e000 bytes 00:04:25.321 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:25.321 EAL: Setting up physically contiguous memory... 00:04:25.321 EAL: Setting maximum number of open files to 524288 00:04:25.321 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:25.321 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:25.321 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:25.321 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.321 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:25.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.321 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.321 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:25.321 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:25.321 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.321 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:25.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.321 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.321 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:25.321 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:25.321 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.321 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:25.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.321 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.321 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:25.321 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:25.321 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.321 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:25.321 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:25.321 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.321 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:25.321 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:25.321 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:25.321 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.321 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:25.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.321 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.321 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:25.321 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:25.321 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.321 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:25.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.321 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.321 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:25.321 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:25.321 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.321 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:25.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.321 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.321 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:04:25.321 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:25.321 EAL: Ask a virtual area of 0x61000 bytes 00:04:25.321 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:25.321 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:25.321 EAL: Ask a virtual area of 0x400000000 bytes 00:04:25.321 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:25.321 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:25.321 EAL: Hugepages will be freed exactly as allocated. 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: TSC frequency is ~2300000 KHz 00:04:25.321 EAL: Main lcore 0 is ready (tid=7f73a3ad5a00;cpuset=[0]) 00:04:25.321 EAL: Trying to obtain current memory policy. 00:04:25.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.321 EAL: Restoring previous memory policy: 0 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was expanded by 2MB 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:25.321 EAL: Mem event callback 'spdk:(nil)' registered 00:04:25.321 00:04:25.321 00:04:25.321 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.321 http://cunit.sourceforge.net/ 00:04:25.321 00:04:25.321 00:04:25.321 Suite: components_suite 00:04:25.321 Test: vtophys_malloc_test ...passed 00:04:25.321 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:25.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.321 EAL: Restoring previous memory policy: 4 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was expanded by 4MB 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was shrunk by 4MB 00:04:25.321 EAL: Trying to obtain current memory policy. 00:04:25.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.321 EAL: Restoring previous memory policy: 4 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was expanded by 6MB 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was shrunk by 6MB 00:04:25.321 EAL: Trying to obtain current memory policy. 00:04:25.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.321 EAL: Restoring previous memory policy: 4 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was expanded by 10MB 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was shrunk by 10MB 00:04:25.321 EAL: Trying to obtain current memory policy. 
00:04:25.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.321 EAL: Restoring previous memory policy: 4 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was expanded by 18MB 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was shrunk by 18MB 00:04:25.321 EAL: Trying to obtain current memory policy. 00:04:25.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.321 EAL: Restoring previous memory policy: 4 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was expanded by 34MB 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was shrunk by 34MB 00:04:25.321 EAL: Trying to obtain current memory policy. 00:04:25.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.321 EAL: Restoring previous memory policy: 4 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was expanded by 66MB 00:04:25.321 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.321 EAL: request: mp_malloc_sync 00:04:25.321 EAL: No shared files mode enabled, IPC is disabled 00:04:25.321 EAL: Heap on socket 0 was shrunk by 66MB 00:04:25.321 EAL: Trying to obtain current memory policy. 00:04:25.321 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.579 EAL: Restoring previous memory policy: 4 00:04:25.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.579 EAL: request: mp_malloc_sync 00:04:25.579 EAL: No shared files mode enabled, IPC is disabled 00:04:25.579 EAL: Heap on socket 0 was expanded by 130MB 00:04:25.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.579 EAL: request: mp_malloc_sync 00:04:25.579 EAL: No shared files mode enabled, IPC is disabled 00:04:25.579 EAL: Heap on socket 0 was shrunk by 130MB 00:04:25.579 EAL: Trying to obtain current memory policy. 00:04:25.579 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.579 EAL: Restoring previous memory policy: 4 00:04:25.579 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.579 EAL: request: mp_malloc_sync 00:04:25.579 EAL: No shared files mode enabled, IPC is disabled 00:04:25.579 EAL: Heap on socket 0 was expanded by 258MB 00:04:25.580 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.580 EAL: request: mp_malloc_sync 00:04:25.580 EAL: No shared files mode enabled, IPC is disabled 00:04:25.580 EAL: Heap on socket 0 was shrunk by 258MB 00:04:25.580 EAL: Trying to obtain current memory policy. 
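Each expand/shrink pair above is the vtophys malloc test growing and releasing the DPDK heap in 2 MB hugepages; the pool those pages come from can be watched from another shell while the test runs:

    # System-wide pool (compare the "node0 2048kB 2048 / 2048" row in the
    # setup.sh status output earlier in this log):
    grep -E 'HugePages_(Total|Free)' /proc/meminfo
    # Per-NUMA-node view of the same 2048 kB pool:
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages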
00:04:25.580 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.837 EAL: Restoring previous memory policy: 4 00:04:25.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.837 EAL: request: mp_malloc_sync 00:04:25.837 EAL: No shared files mode enabled, IPC is disabled 00:04:25.837 EAL: Heap on socket 0 was expanded by 514MB 00:04:25.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.837 EAL: request: mp_malloc_sync 00:04:25.837 EAL: No shared files mode enabled, IPC is disabled 00:04:25.837 EAL: Heap on socket 0 was shrunk by 514MB 00:04:25.837 EAL: Trying to obtain current memory policy. 00:04:25.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.096 EAL: Restoring previous memory policy: 4 00:04:26.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.096 EAL: request: mp_malloc_sync 00:04:26.096 EAL: No shared files mode enabled, IPC is disabled 00:04:26.096 EAL: Heap on socket 0 was expanded by 1026MB 00:04:26.096 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.354 EAL: request: mp_malloc_sync 00:04:26.354 EAL: No shared files mode enabled, IPC is disabled 00:04:26.354 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:26.354 passed 00:04:26.354 00:04:26.354 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.354 suites 1 1 n/a 0 0 00:04:26.354 tests 2 2 2 0 0 00:04:26.354 asserts 497 497 497 0 n/a 00:04:26.354 00:04:26.354 Elapsed time = 0.960 seconds 00:04:26.354 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.354 EAL: request: mp_malloc_sync 00:04:26.354 EAL: No shared files mode enabled, IPC is disabled 00:04:26.354 EAL: Heap on socket 0 was shrunk by 2MB 00:04:26.354 EAL: No shared files mode enabled, IPC is disabled 00:04:26.354 EAL: No shared files mode enabled, IPC is disabled 00:04:26.354 EAL: No shared files mode enabled, IPC is disabled 00:04:26.354 00:04:26.354 real 0m1.069s 00:04:26.354 user 0m0.628s 00:04:26.354 sys 0m0.413s 00:04:26.355 13:46:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.355 13:46:17 -- common/autotest_common.sh@10 -- # set +x 00:04:26.355 ************************************ 00:04:26.355 END TEST env_vtophys 00:04:26.355 ************************************ 00:04:26.355 13:46:17 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:26.355 13:46:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:26.355 13:46:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.355 13:46:17 -- common/autotest_common.sh@10 -- # set +x 00:04:26.355 ************************************ 00:04:26.355 START TEST env_pci 00:04:26.355 ************************************ 00:04:26.355 13:46:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:26.355 00:04:26.355 00:04:26.355 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.355 http://cunit.sourceforge.net/ 00:04:26.355 00:04:26.355 00:04:26.355 Suite: pci 00:04:26.355 Test: pci_hook ...[2024-07-23 13:46:17.317437] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3075365 has claimed it 00:04:26.355 EAL: Cannot find device (10000:00:01.0) 00:04:26.355 EAL: Failed to attach device on primary process 00:04:26.355 passed 00:04:26.355 00:04:26.355 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.355 suites 1 1 n/a 0 0 00:04:26.355 tests 1 1 1 0 0 
00:04:26.355 asserts 25 25 25 0 n/a 00:04:26.355 00:04:26.355 Elapsed time = 0.026 seconds 00:04:26.355 00:04:26.355 real 0m0.045s 00:04:26.355 user 0m0.012s 00:04:26.355 sys 0m0.033s 00:04:26.355 13:46:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.355 13:46:17 -- common/autotest_common.sh@10 -- # set +x 00:04:26.355 ************************************ 00:04:26.355 END TEST env_pci 00:04:26.355 ************************************ 00:04:26.614 13:46:17 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:26.614 13:46:17 -- env/env.sh@15 -- # uname 00:04:26.614 13:46:17 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:26.614 13:46:17 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:26.614 13:46:17 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.614 13:46:17 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:04:26.614 13:46:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:26.614 13:46:17 -- common/autotest_common.sh@10 -- # set +x 00:04:26.614 ************************************ 00:04:26.614 START TEST env_dpdk_post_init 00:04:26.614 ************************************ 00:04:26.614 13:46:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:26.614 EAL: Detected CPU lcores: 96 00:04:26.614 EAL: Detected NUMA nodes: 2 00:04:26.614 EAL: Detected shared linkage of DPDK 00:04:26.614 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:26.614 EAL: Selected IOVA mode 'VA' 00:04:26.614 EAL: No free 2048 kB hugepages reported on node 1 00:04:26.614 EAL: VFIO support initialized 00:04:26.614 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:26.614 EAL: Using IOMMU type 1 (Type 1) 00:04:26.614 EAL: Ignore mapping IO port bar(1) 00:04:26.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:26.614 EAL: Ignore mapping IO port bar(1) 00:04:26.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:26.614 EAL: Ignore mapping IO port bar(1) 00:04:26.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:26.614 EAL: Ignore mapping IO port bar(1) 00:04:26.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:26.614 EAL: Ignore mapping IO port bar(1) 00:04:26.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:26.614 EAL: Ignore mapping IO port bar(1) 00:04:26.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:26.614 EAL: Ignore mapping IO port bar(1) 00:04:26.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:26.614 EAL: Ignore mapping IO port bar(1) 00:04:26.614 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:27.550 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:27.550 EAL: Ignore mapping IO port bar(1) 00:04:27.550 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:27.550 EAL: Ignore mapping IO port bar(1) 00:04:27.550 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:27.550 EAL: Ignore mapping IO port bar(1) 00:04:27.550 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 
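The spdk_ioat and spdk_nvme probes above only find these BDFs because setup.sh rebound them from their kernel drivers (ioatdma, nvme) to vfio-pci earlier in the log; which driver currently owns a function is a single sysfs symlink away:

    # Prints vfio-pci while SPDK owns the device; ioatdma/nvme after
    # setup.sh reset hands it back to the kernel.
    basename "$(readlink /sys/bus/pci/devices/0000:00:04.0/driver)"
    basename "$(readlink /sys/bus/pci/devices/0000:5e:00.0/driver)"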
00:04:27.550 EAL: Ignore mapping IO port bar(1) 00:04:27.550 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:27.550 EAL: Ignore mapping IO port bar(1) 00:04:27.550 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:27.550 EAL: Ignore mapping IO port bar(1) 00:04:27.550 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:27.550 EAL: Ignore mapping IO port bar(1) 00:04:27.550 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:27.550 EAL: Ignore mapping IO port bar(1) 00:04:27.550 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:30.832 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:30.832 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:30.832 Starting DPDK initialization... 00:04:30.832 Starting SPDK post initialization... 00:04:30.832 SPDK NVMe probe 00:04:30.832 Attaching to 0000:5e:00.0 00:04:30.832 Attached to 0000:5e:00.0 00:04:30.832 Cleaning up... 00:04:30.832 00:04:30.832 real 0m4.353s 00:04:30.832 user 0m3.294s 00:04:30.832 sys 0m0.127s 00:04:30.832 13:46:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.832 13:46:21 -- common/autotest_common.sh@10 -- # set +x 00:04:30.832 ************************************ 00:04:30.832 END TEST env_dpdk_post_init 00:04:30.832 ************************************ 00:04:30.832 13:46:21 -- env/env.sh@26 -- # uname 00:04:30.832 13:46:21 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:30.832 13:46:21 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:30.832 13:46:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:30.832 13:46:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:30.832 13:46:21 -- common/autotest_common.sh@10 -- # set +x 00:04:30.832 ************************************ 00:04:30.832 START TEST env_mem_callbacks 00:04:30.832 ************************************ 00:04:30.832 13:46:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:30.832 EAL: Detected CPU lcores: 96 00:04:30.832 EAL: Detected NUMA nodes: 2 00:04:30.832 EAL: Detected shared linkage of DPDK 00:04:30.832 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.832 EAL: Selected IOVA mode 'VA' 00:04:30.832 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.832 EAL: VFIO support initialized 00:04:30.832 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.832 00:04:30.832 00:04:30.832 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.832 http://cunit.sourceforge.net/ 00:04:30.832 00:04:30.832 00:04:30.832 Suite: memory 00:04:30.832 Test: test ... 
00:04:30.832 register 0x200000200000 2097152 00:04:30.832 malloc 3145728 00:04:30.832 register 0x200000400000 4194304 00:04:30.832 buf 0x200000500000 len 3145728 PASSED 00:04:30.832 malloc 64 00:04:30.832 buf 0x2000004fff40 len 64 PASSED 00:04:30.832 malloc 4194304 00:04:30.832 register 0x200000800000 6291456 00:04:30.832 buf 0x200000a00000 len 4194304 PASSED 00:04:30.832 free 0x200000500000 3145728 00:04:30.832 free 0x2000004fff40 64 00:04:30.832 unregister 0x200000400000 4194304 PASSED 00:04:30.832 free 0x200000a00000 4194304 00:04:30.832 unregister 0x200000800000 6291456 PASSED 00:04:30.832 malloc 8388608 00:04:30.832 register 0x200000400000 10485760 00:04:30.832 buf 0x200000600000 len 8388608 PASSED 00:04:30.832 free 0x200000600000 8388608 00:04:30.832 unregister 0x200000400000 10485760 PASSED 00:04:30.832 passed 00:04:30.832 00:04:30.832 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.832 suites 1 1 n/a 0 0 00:04:30.832 tests 1 1 1 0 0 00:04:30.832 asserts 15 15 15 0 n/a 00:04:30.832 00:04:30.832 Elapsed time = 0.004 seconds 00:04:30.832 00:04:30.832 real 0m0.052s 00:04:30.832 user 0m0.015s 00:04:30.832 sys 0m0.037s 00:04:30.832 13:46:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.832 13:46:21 -- common/autotest_common.sh@10 -- # set +x 00:04:30.832 ************************************ 00:04:30.832 END TEST env_mem_callbacks 00:04:30.832 ************************************ 00:04:30.832 00:04:30.832 real 0m5.900s 00:04:30.832 user 0m4.161s 00:04:30.832 sys 0m0.803s 00:04:30.832 13:46:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.091 13:46:21 -- common/autotest_common.sh@10 -- # set +x 00:04:31.091 ************************************ 00:04:31.091 END TEST env 00:04:31.091 ************************************ 00:04:31.091 13:46:21 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:31.091 13:46:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.091 13:46:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.091 13:46:21 -- common/autotest_common.sh@10 -- # set +x 00:04:31.091 ************************************ 00:04:31.091 START TEST rpc 00:04:31.091 ************************************ 00:04:31.091 13:46:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:31.091 * Looking for test storage... 00:04:31.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.091 13:46:21 -- rpc/rpc.sh@65 -- # spdk_pid=3076186 00:04:31.091 13:46:21 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:31.091 13:46:21 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:31.091 13:46:21 -- rpc/rpc.sh@67 -- # waitforlisten 3076186 00:04:31.091 13:46:21 -- common/autotest_common.sh@819 -- # '[' -z 3076186 ']' 00:04:31.091 13:46:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.091 13:46:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:31.091 13:46:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
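For TEST rpc, spdk_tgt has just been launched with '-e bdev' and waitforlisten blocks until the RPC socket answers. A minimal sketch of that handshake (the polling loop and the rpc_get_methods probe are illustrative, not the exact helper):

    # Start the target, then poll the default /var/tmp/spdk.sock until an
    # RPC round-trip succeeds.
    build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    echo "target listening, pid $spdk_pid"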
00:04:31.091 13:46:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:31.091 13:46:21 -- common/autotest_common.sh@10 -- # set +x 00:04:31.091 [2024-07-23 13:46:22.016394] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:31.092 [2024-07-23 13:46:22.016444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076186 ] 00:04:31.092 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.092 [2024-07-23 13:46:22.072464] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.350 [2024-07-23 13:46:22.144459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:31.350 [2024-07-23 13:46:22.144571] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:31.350 [2024-07-23 13:46:22.144580] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3076186' to capture a snapshot of events at runtime. 00:04:31.350 [2024-07-23 13:46:22.144586] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3076186 for offline analysis/debug. 00:04:31.350 [2024-07-23 13:46:22.144604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.917 13:46:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:31.917 13:46:22 -- common/autotest_common.sh@852 -- # return 0 00:04:31.917 13:46:22 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.917 13:46:22 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:31.917 13:46:22 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:31.917 13:46:22 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:31.917 13:46:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:31.917 13:46:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:31.917 13:46:22 -- common/autotest_common.sh@10 -- # set +x 00:04:31.917 ************************************ 00:04:31.917 START TEST rpc_integrity 00:04:31.917 ************************************ 00:04:31.917 13:46:22 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:31.917 13:46:22 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.917 13:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.917 13:46:22 -- common/autotest_common.sh@10 -- # set +x 00:04:31.917 13:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.917 13:46:22 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:31.917 13:46:22 -- rpc/rpc.sh@13 -- # jq length 00:04:31.917 13:46:22 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:31.917 13:46:22 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:31.917 13:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.917 13:46:22 -- common/autotest_common.sh@10 -- # set +x 00:04:31.917 13:46:22 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:04:31.917 13:46:22 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:31.917 13:46:22 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:31.917 13:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:31.917 13:46:22 -- common/autotest_common.sh@10 -- # set +x 00:04:31.917 13:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:31.917 13:46:22 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:31.917 { 00:04:31.917 "name": "Malloc0", 00:04:31.917 "aliases": [ 00:04:31.917 "cea3c922-33f8-4e2f-b020-7b8f5edae406" 00:04:31.917 ], 00:04:31.917 "product_name": "Malloc disk", 00:04:31.917 "block_size": 512, 00:04:31.917 "num_blocks": 16384, 00:04:31.917 "uuid": "cea3c922-33f8-4e2f-b020-7b8f5edae406", 00:04:31.917 "assigned_rate_limits": { 00:04:31.917 "rw_ios_per_sec": 0, 00:04:31.917 "rw_mbytes_per_sec": 0, 00:04:31.917 "r_mbytes_per_sec": 0, 00:04:31.917 "w_mbytes_per_sec": 0 00:04:31.917 }, 00:04:31.917 "claimed": false, 00:04:31.917 "zoned": false, 00:04:31.917 "supported_io_types": { 00:04:31.917 "read": true, 00:04:31.917 "write": true, 00:04:31.917 "unmap": true, 00:04:31.917 "write_zeroes": true, 00:04:31.917 "flush": true, 00:04:31.917 "reset": true, 00:04:31.917 "compare": false, 00:04:31.917 "compare_and_write": false, 00:04:31.917 "abort": true, 00:04:31.917 "nvme_admin": false, 00:04:31.917 "nvme_io": false 00:04:31.917 }, 00:04:31.917 "memory_domains": [ 00:04:31.917 { 00:04:31.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:31.917 "dma_device_type": 2 00:04:31.917 } 00:04:31.917 ], 00:04:31.917 "driver_specific": {} 00:04:31.917 } 00:04:31.917 ]' 00:04:31.917 13:46:22 -- rpc/rpc.sh@17 -- # jq length 00:04:32.176 13:46:22 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.176 13:46:22 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:32.176 13:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.176 13:46:22 -- common/autotest_common.sh@10 -- # set +x 00:04:32.176 [2024-07-23 13:46:22.955759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:32.176 [2024-07-23 13:46:22.955795] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.176 [2024-07-23 13:46:22.955810] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb15860 00:04:32.176 [2024-07-23 13:46:22.955817] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.176 [2024-07-23 13:46:22.956924] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.176 [2024-07-23 13:46:22.956945] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.176 Passthru0 00:04:32.176 13:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.176 13:46:22 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.176 13:46:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.176 13:46:22 -- common/autotest_common.sh@10 -- # set +x 00:04:32.176 13:46:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.176 13:46:22 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.176 { 00:04:32.176 "name": "Malloc0", 00:04:32.176 "aliases": [ 00:04:32.176 "cea3c922-33f8-4e2f-b020-7b8f5edae406" 00:04:32.176 ], 00:04:32.176 "product_name": "Malloc disk", 00:04:32.176 "block_size": 512, 00:04:32.176 "num_blocks": 16384, 00:04:32.176 "uuid": "cea3c922-33f8-4e2f-b020-7b8f5edae406", 00:04:32.176 "assigned_rate_limits": { 00:04:32.176 "rw_ios_per_sec": 0, 00:04:32.176 "rw_mbytes_per_sec": 0, 00:04:32.176 
"r_mbytes_per_sec": 0, 00:04:32.176 "w_mbytes_per_sec": 0 00:04:32.176 }, 00:04:32.176 "claimed": true, 00:04:32.176 "claim_type": "exclusive_write", 00:04:32.176 "zoned": false, 00:04:32.176 "supported_io_types": { 00:04:32.176 "read": true, 00:04:32.176 "write": true, 00:04:32.176 "unmap": true, 00:04:32.176 "write_zeroes": true, 00:04:32.176 "flush": true, 00:04:32.176 "reset": true, 00:04:32.176 "compare": false, 00:04:32.176 "compare_and_write": false, 00:04:32.176 "abort": true, 00:04:32.176 "nvme_admin": false, 00:04:32.176 "nvme_io": false 00:04:32.176 }, 00:04:32.176 "memory_domains": [ 00:04:32.176 { 00:04:32.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.176 "dma_device_type": 2 00:04:32.176 } 00:04:32.176 ], 00:04:32.176 "driver_specific": {} 00:04:32.176 }, 00:04:32.176 { 00:04:32.176 "name": "Passthru0", 00:04:32.176 "aliases": [ 00:04:32.176 "e70e5f42-9223-50f2-9361-2b49bc56c7ce" 00:04:32.176 ], 00:04:32.176 "product_name": "passthru", 00:04:32.176 "block_size": 512, 00:04:32.176 "num_blocks": 16384, 00:04:32.176 "uuid": "e70e5f42-9223-50f2-9361-2b49bc56c7ce", 00:04:32.176 "assigned_rate_limits": { 00:04:32.176 "rw_ios_per_sec": 0, 00:04:32.176 "rw_mbytes_per_sec": 0, 00:04:32.177 "r_mbytes_per_sec": 0, 00:04:32.177 "w_mbytes_per_sec": 0 00:04:32.177 }, 00:04:32.177 "claimed": false, 00:04:32.177 "zoned": false, 00:04:32.177 "supported_io_types": { 00:04:32.177 "read": true, 00:04:32.177 "write": true, 00:04:32.177 "unmap": true, 00:04:32.177 "write_zeroes": true, 00:04:32.177 "flush": true, 00:04:32.177 "reset": true, 00:04:32.177 "compare": false, 00:04:32.177 "compare_and_write": false, 00:04:32.177 "abort": true, 00:04:32.177 "nvme_admin": false, 00:04:32.177 "nvme_io": false 00:04:32.177 }, 00:04:32.177 "memory_domains": [ 00:04:32.177 { 00:04:32.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.177 "dma_device_type": 2 00:04:32.177 } 00:04:32.177 ], 00:04:32.177 "driver_specific": { 00:04:32.177 "passthru": { 00:04:32.177 "name": "Passthru0", 00:04:32.177 "base_bdev_name": "Malloc0" 00:04:32.177 } 00:04:32.177 } 00:04:32.177 } 00:04:32.177 ]' 00:04:32.177 13:46:22 -- rpc/rpc.sh@21 -- # jq length 00:04:32.177 13:46:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.177 13:46:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.177 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.177 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.177 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.177 13:46:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:32.177 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.177 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.177 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.177 13:46:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.177 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.177 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.177 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.177 13:46:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.177 13:46:23 -- rpc/rpc.sh@26 -- # jq length 00:04:32.177 13:46:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.177 00:04:32.177 real 0m0.258s 00:04:32.177 user 0m0.178s 00:04:32.177 sys 0m0.022s 00:04:32.177 13:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.177 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.177 ************************************ 
00:04:32.177 END TEST rpc_integrity 00:04:32.177 ************************************ 00:04:32.177 13:46:23 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:32.177 13:46:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.177 13:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.177 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.177 ************************************ 00:04:32.177 START TEST rpc_plugins 00:04:32.177 ************************************ 00:04:32.177 13:46:23 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:04:32.177 13:46:23 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:32.177 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.177 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.177 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.177 13:46:23 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:32.177 13:46:23 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:32.177 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.177 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.177 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.177 13:46:23 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:32.177 { 00:04:32.177 "name": "Malloc1", 00:04:32.177 "aliases": [ 00:04:32.177 "9f408fbb-28f1-46ea-968c-fc211935c661" 00:04:32.177 ], 00:04:32.177 "product_name": "Malloc disk", 00:04:32.177 "block_size": 4096, 00:04:32.177 "num_blocks": 256, 00:04:32.177 "uuid": "9f408fbb-28f1-46ea-968c-fc211935c661", 00:04:32.177 "assigned_rate_limits": { 00:04:32.177 "rw_ios_per_sec": 0, 00:04:32.177 "rw_mbytes_per_sec": 0, 00:04:32.177 "r_mbytes_per_sec": 0, 00:04:32.177 "w_mbytes_per_sec": 0 00:04:32.177 }, 00:04:32.177 "claimed": false, 00:04:32.177 "zoned": false, 00:04:32.177 "supported_io_types": { 00:04:32.177 "read": true, 00:04:32.177 "write": true, 00:04:32.177 "unmap": true, 00:04:32.177 "write_zeroes": true, 00:04:32.177 "flush": true, 00:04:32.177 "reset": true, 00:04:32.177 "compare": false, 00:04:32.177 "compare_and_write": false, 00:04:32.177 "abort": true, 00:04:32.177 "nvme_admin": false, 00:04:32.177 "nvme_io": false 00:04:32.177 }, 00:04:32.177 "memory_domains": [ 00:04:32.177 { 00:04:32.177 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.177 "dma_device_type": 2 00:04:32.177 } 00:04:32.177 ], 00:04:32.177 "driver_specific": {} 00:04:32.177 } 00:04:32.177 ]' 00:04:32.177 13:46:23 -- rpc/rpc.sh@32 -- # jq length 00:04:32.177 13:46:23 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:32.177 13:46:23 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:32.177 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.177 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.435 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.435 13:46:23 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:32.435 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.435 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.435 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.435 13:46:23 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:32.435 13:46:23 -- rpc/rpc.sh@36 -- # jq length 00:04:32.435 13:46:23 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:32.435 00:04:32.435 real 0m0.130s 00:04:32.435 user 0m0.088s 00:04:32.435 sys 0m0.013s 00:04:32.435 13:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.435 13:46:23 -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.435 ************************************ 00:04:32.435 END TEST rpc_plugins 00:04:32.435 ************************************ 00:04:32.435 13:46:23 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:32.435 13:46:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.435 13:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.435 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.435 ************************************ 00:04:32.435 START TEST rpc_trace_cmd_test 00:04:32.435 ************************************ 00:04:32.435 13:46:23 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:04:32.435 13:46:23 -- rpc/rpc.sh@40 -- # local info 00:04:32.435 13:46:23 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:32.435 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.435 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.435 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.435 13:46:23 -- rpc/rpc.sh@42 -- # info='{ 00:04:32.435 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3076186", 00:04:32.435 "tpoint_group_mask": "0x8", 00:04:32.435 "iscsi_conn": { 00:04:32.435 "mask": "0x2", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "scsi": { 00:04:32.435 "mask": "0x4", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "bdev": { 00:04:32.435 "mask": "0x8", 00:04:32.435 "tpoint_mask": "0xffffffffffffffff" 00:04:32.435 }, 00:04:32.435 "nvmf_rdma": { 00:04:32.435 "mask": "0x10", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "nvmf_tcp": { 00:04:32.435 "mask": "0x20", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "ftl": { 00:04:32.435 "mask": "0x40", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "blobfs": { 00:04:32.435 "mask": "0x80", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "dsa": { 00:04:32.435 "mask": "0x200", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "thread": { 00:04:32.435 "mask": "0x400", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "nvme_pcie": { 00:04:32.435 "mask": "0x800", 00:04:32.435 "tpoint_mask": "0x0" 00:04:32.435 }, 00:04:32.435 "iaa": { 00:04:32.435 "mask": "0x1000", 00:04:32.436 "tpoint_mask": "0x0" 00:04:32.436 }, 00:04:32.436 "nvme_tcp": { 00:04:32.436 "mask": "0x2000", 00:04:32.436 "tpoint_mask": "0x0" 00:04:32.436 }, 00:04:32.436 "bdev_nvme": { 00:04:32.436 "mask": "0x4000", 00:04:32.436 "tpoint_mask": "0x0" 00:04:32.436 } 00:04:32.436 }' 00:04:32.436 13:46:23 -- rpc/rpc.sh@43 -- # jq length 00:04:32.436 13:46:23 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:32.436 13:46:23 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:32.436 13:46:23 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:32.436 13:46:23 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:32.436 13:46:23 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:32.436 13:46:23 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:32.694 13:46:23 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:32.694 13:46:23 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:32.694 13:46:23 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:32.694 00:04:32.694 real 0m0.220s 00:04:32.694 user 0m0.189s 00:04:32.694 sys 0m0.022s 00:04:32.694 13:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.694 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.694 ************************************ 
00:04:32.694 END TEST rpc_trace_cmd_test 00:04:32.694 ************************************ 00:04:32.694 13:46:23 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:32.694 13:46:23 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:32.694 13:46:23 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:32.694 13:46:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:32.694 13:46:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:32.694 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.694 ************************************ 00:04:32.694 START TEST rpc_daemon_integrity 00:04:32.694 ************************************ 00:04:32.694 13:46:23 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:04:32.694 13:46:23 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.694 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.694 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.694 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.694 13:46:23 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.694 13:46:23 -- rpc/rpc.sh@13 -- # jq length 00:04:32.694 13:46:23 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.694 13:46:23 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.694 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.694 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.694 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.694 13:46:23 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:32.694 13:46:23 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.694 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.694 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.694 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.694 13:46:23 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.694 { 00:04:32.694 "name": "Malloc2", 00:04:32.694 "aliases": [ 00:04:32.694 "295da829-9519-4214-8619-dc23ebcd92ac" 00:04:32.694 ], 00:04:32.694 "product_name": "Malloc disk", 00:04:32.694 "block_size": 512, 00:04:32.694 "num_blocks": 16384, 00:04:32.694 "uuid": "295da829-9519-4214-8619-dc23ebcd92ac", 00:04:32.694 "assigned_rate_limits": { 00:04:32.694 "rw_ios_per_sec": 0, 00:04:32.694 "rw_mbytes_per_sec": 0, 00:04:32.694 "r_mbytes_per_sec": 0, 00:04:32.694 "w_mbytes_per_sec": 0 00:04:32.694 }, 00:04:32.694 "claimed": false, 00:04:32.694 "zoned": false, 00:04:32.694 "supported_io_types": { 00:04:32.694 "read": true, 00:04:32.694 "write": true, 00:04:32.694 "unmap": true, 00:04:32.694 "write_zeroes": true, 00:04:32.694 "flush": true, 00:04:32.694 "reset": true, 00:04:32.694 "compare": false, 00:04:32.694 "compare_and_write": false, 00:04:32.694 "abort": true, 00:04:32.694 "nvme_admin": false, 00:04:32.694 "nvme_io": false 00:04:32.694 }, 00:04:32.694 "memory_domains": [ 00:04:32.694 { 00:04:32.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.694 "dma_device_type": 2 00:04:32.694 } 00:04:32.694 ], 00:04:32.694 "driver_specific": {} 00:04:32.694 } 00:04:32.694 ]' 00:04:32.694 13:46:23 -- rpc/rpc.sh@17 -- # jq length 00:04:32.694 13:46:23 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.694 13:46:23 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:32.694 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.694 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.694 [2024-07-23 13:46:23.657664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:32.694 [2024-07-23 
13:46:23.657692] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.694 [2024-07-23 13:46:23.657706] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xb16360 00:04:32.694 [2024-07-23 13:46:23.657713] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.694 [2024-07-23 13:46:23.658670] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.694 [2024-07-23 13:46:23.658691] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.694 Passthru0 00:04:32.694 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.694 13:46:23 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.694 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.694 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.694 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.694 13:46:23 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.694 { 00:04:32.694 "name": "Malloc2", 00:04:32.694 "aliases": [ 00:04:32.694 "295da829-9519-4214-8619-dc23ebcd92ac" 00:04:32.694 ], 00:04:32.694 "product_name": "Malloc disk", 00:04:32.694 "block_size": 512, 00:04:32.694 "num_blocks": 16384, 00:04:32.694 "uuid": "295da829-9519-4214-8619-dc23ebcd92ac", 00:04:32.694 "assigned_rate_limits": { 00:04:32.694 "rw_ios_per_sec": 0, 00:04:32.694 "rw_mbytes_per_sec": 0, 00:04:32.694 "r_mbytes_per_sec": 0, 00:04:32.694 "w_mbytes_per_sec": 0 00:04:32.694 }, 00:04:32.694 "claimed": true, 00:04:32.694 "claim_type": "exclusive_write", 00:04:32.694 "zoned": false, 00:04:32.694 "supported_io_types": { 00:04:32.694 "read": true, 00:04:32.694 "write": true, 00:04:32.694 "unmap": true, 00:04:32.694 "write_zeroes": true, 00:04:32.694 "flush": true, 00:04:32.694 "reset": true, 00:04:32.694 "compare": false, 00:04:32.694 "compare_and_write": false, 00:04:32.694 "abort": true, 00:04:32.694 "nvme_admin": false, 00:04:32.694 "nvme_io": false 00:04:32.694 }, 00:04:32.694 "memory_domains": [ 00:04:32.694 { 00:04:32.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.694 "dma_device_type": 2 00:04:32.694 } 00:04:32.694 ], 00:04:32.694 "driver_specific": {} 00:04:32.694 }, 00:04:32.694 { 00:04:32.694 "name": "Passthru0", 00:04:32.694 "aliases": [ 00:04:32.694 "36976e2b-b15b-5363-92a1-ac9e08f6a962" 00:04:32.694 ], 00:04:32.694 "product_name": "passthru", 00:04:32.694 "block_size": 512, 00:04:32.694 "num_blocks": 16384, 00:04:32.694 "uuid": "36976e2b-b15b-5363-92a1-ac9e08f6a962", 00:04:32.694 "assigned_rate_limits": { 00:04:32.694 "rw_ios_per_sec": 0, 00:04:32.694 "rw_mbytes_per_sec": 0, 00:04:32.694 "r_mbytes_per_sec": 0, 00:04:32.694 "w_mbytes_per_sec": 0 00:04:32.694 }, 00:04:32.694 "claimed": false, 00:04:32.694 "zoned": false, 00:04:32.694 "supported_io_types": { 00:04:32.694 "read": true, 00:04:32.694 "write": true, 00:04:32.694 "unmap": true, 00:04:32.694 "write_zeroes": true, 00:04:32.694 "flush": true, 00:04:32.694 "reset": true, 00:04:32.694 "compare": false, 00:04:32.694 "compare_and_write": false, 00:04:32.694 "abort": true, 00:04:32.694 "nvme_admin": false, 00:04:32.694 "nvme_io": false 00:04:32.694 }, 00:04:32.694 "memory_domains": [ 00:04:32.694 { 00:04:32.694 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.694 "dma_device_type": 2 00:04:32.694 } 00:04:32.694 ], 00:04:32.694 "driver_specific": { 00:04:32.694 "passthru": { 00:04:32.694 "name": "Passthru0", 00:04:32.694 "base_bdev_name": "Malloc2" 00:04:32.694 } 00:04:32.694 } 00:04:32.694 } 
00:04:32.694 ]' 00:04:32.694 13:46:23 -- rpc/rpc.sh@21 -- # jq length 00:04:32.953 13:46:23 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.953 13:46:23 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.953 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.953 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.953 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.953 13:46:23 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:32.953 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.953 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.953 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.953 13:46:23 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.953 13:46:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:32.953 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.953 13:46:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:32.953 13:46:23 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.953 13:46:23 -- rpc/rpc.sh@26 -- # jq length 00:04:32.953 13:46:23 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.953 00:04:32.953 real 0m0.252s 00:04:32.953 user 0m0.169s 00:04:32.953 sys 0m0.028s 00:04:32.953 13:46:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.953 13:46:23 -- common/autotest_common.sh@10 -- # set +x 00:04:32.953 ************************************ 00:04:32.953 END TEST rpc_daemon_integrity 00:04:32.953 ************************************ 00:04:32.953 13:46:23 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:32.953 13:46:23 -- rpc/rpc.sh@84 -- # killprocess 3076186 00:04:32.953 13:46:23 -- common/autotest_common.sh@926 -- # '[' -z 3076186 ']' 00:04:32.953 13:46:23 -- common/autotest_common.sh@930 -- # kill -0 3076186 00:04:32.953 13:46:23 -- common/autotest_common.sh@931 -- # uname 00:04:32.953 13:46:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:32.953 13:46:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3076186 00:04:32.953 13:46:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:32.953 13:46:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:32.953 13:46:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3076186' 00:04:32.953 killing process with pid 3076186 00:04:32.953 13:46:23 -- common/autotest_common.sh@945 -- # kill 3076186 00:04:32.953 13:46:23 -- common/autotest_common.sh@950 -- # wait 3076186 00:04:33.212 00:04:33.212 real 0m2.314s 00:04:33.212 user 0m2.987s 00:04:33.212 sys 0m0.575s 00:04:33.212 13:46:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.212 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.212 ************************************ 00:04:33.212 END TEST rpc 00:04:33.212 ************************************ 00:04:33.471 13:46:24 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:33.471 13:46:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.471 13:46:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.471 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.471 ************************************ 00:04:33.471 START TEST rpc_client 00:04:33.471 ************************************ 00:04:33.471 13:46:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
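The rpc suite that just closed (rpc_integrity, rpc_plugins, rpc_trace_cmd_test, rpc_daemon_integrity) drives spdk_tgt purely over its JSON-RPC socket. A minimal sketch of the integrity flow outside the harness, assuming a target already listening on rpc.py's default socket; bdev names match the trace above, the jq probe is illustrative:

  scripts/rpc.py bdev_malloc_create 8 512                      # Malloc0: 16384 blocks of 512 B, as in the dump above
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # layer a passthru bdev on the malloc base
  scripts/rpc.py bdev_get_bdevs | jq '.[0].claimed'            # base bdev now reports claimed=true (claim_type exclusive_write)
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0                    # bdev_get_bdevs returns [] again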
00:04:33.471 * Looking for test storage... 00:04:33.471 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:33.471 13:46:24 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:33.471 OK 00:04:33.471 13:46:24 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:33.471 00:04:33.471 real 0m0.105s 00:04:33.471 user 0m0.047s 00:04:33.471 sys 0m0.065s 00:04:33.471 13:46:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.471 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.471 ************************************ 00:04:33.471 END TEST rpc_client 00:04:33.471 ************************************ 00:04:33.472 13:46:24 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:33.472 13:46:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:33.472 13:46:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:33.472 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.472 ************************************ 00:04:33.472 START TEST json_config 00:04:33.472 ************************************ 00:04:33.472 13:46:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:33.472 13:46:24 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:33.472 13:46:24 -- nvmf/common.sh@7 -- # uname -s 00:04:33.472 13:46:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:33.472 13:46:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:33.472 13:46:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:33.472 13:46:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:33.472 13:46:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:33.472 13:46:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:33.472 13:46:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:33.472 13:46:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:33.472 13:46:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:33.472 13:46:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:33.472 13:46:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:33.472 13:46:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:33.472 13:46:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:33.472 13:46:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:33.472 13:46:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:33.472 13:46:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:33.472 13:46:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:33.472 13:46:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:33.472 13:46:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:33.472 13:46:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.472 13:46:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.472 13:46:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.472 13:46:24 -- paths/export.sh@5 -- # export PATH 00:04:33.472 13:46:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:33.472 13:46:24 -- nvmf/common.sh@46 -- # : 0 00:04:33.472 13:46:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:33.472 13:46:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:33.472 13:46:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:33.472 13:46:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:33.472 13:46:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:33.472 13:46:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:33.472 13:46:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:33.472 13:46:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:33.472 13:46:24 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:33.472 13:46:24 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:33.472 13:46:24 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:33.472 13:46:24 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:33.472 13:46:24 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:04:33.472 13:46:24 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:04:33.472 13:46:24 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:33.472 13:46:24 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:04:33.472 13:46:24 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:33.472 13:46:24 -- json_config/json_config.sh@32 -- # declare -A app_params 00:04:33.472 13:46:24 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:33.472 13:46:24 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:04:33.472 13:46:24 -- json_config/json_config.sh@43 -- # last_event_id=0 00:04:33.472 13:46:24 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:33.472 13:46:24 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:04:33.472 INFO: JSON configuration test init 00:04:33.472 13:46:24 -- json_config/json_config.sh@420 -- # json_config_test_init 00:04:33.472 13:46:24 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:04:33.472 13:46:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:33.472 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.472 13:46:24 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:04:33.472 13:46:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:33.472 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.472 13:46:24 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:04:33.472 13:46:24 -- json_config/json_config.sh@98 -- # local app=target 00:04:33.472 13:46:24 -- json_config/json_config.sh@99 -- # shift 00:04:33.472 13:46:24 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:33.472 13:46:24 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:33.472 13:46:24 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:33.472 13:46:24 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:33.472 13:46:24 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:33.472 13:46:24 -- json_config/json_config.sh@111 -- # app_pid[$app]=3076855 00:04:33.472 13:46:24 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:33.472 Waiting for target to run... 00:04:33.472 13:46:24 -- json_config/json_config.sh@114 -- # waitforlisten 3076855 /var/tmp/spdk_tgt.sock 00:04:33.472 13:46:24 -- common/autotest_common.sh@819 -- # '[' -z 3076855 ']' 00:04:33.472 13:46:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.472 13:46:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:33.472 13:46:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.472 13:46:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:33.472 13:46:24 -- common/autotest_common.sh@10 -- # set +x 00:04:33.472 13:46:24 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:33.731 [2024-07-23 13:46:24.498593] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
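json_config starts the target idle: --wait-for-rpc holds subsystem initialization until a configuration arrives on the RPC socket, and waitforlisten polls until that socket answers. A sketch of the same startup with the harness's parameters; the backgrounding and the pipe into load_config are illustrative shorthand for what the trace below performs, with gen_nvme.sh emitting bdev JSON for local NVMe devices:

  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config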
00:04:33.731 [2024-07-23 13:46:24.498645] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3076855 ] 00:04:33.731 EAL: No free 2048 kB hugepages reported on node 1 00:04:33.989 [2024-07-23 13:46:24.764412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.990 [2024-07-23 13:46:24.830646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:33.990 [2024-07-23 13:46:24.830739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.556 13:46:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:34.557 13:46:25 -- common/autotest_common.sh@852 -- # return 0 00:04:34.557 13:46:25 -- json_config/json_config.sh@115 -- # echo '' 00:04:34.557 00:04:34.557 13:46:25 -- json_config/json_config.sh@322 -- # create_accel_config 00:04:34.557 13:46:25 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:04:34.557 13:46:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:34.557 13:46:25 -- common/autotest_common.sh@10 -- # set +x 00:04:34.557 13:46:25 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:04:34.557 13:46:25 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:04:34.557 13:46:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:34.557 13:46:25 -- common/autotest_common.sh@10 -- # set +x 00:04:34.557 13:46:25 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:34.557 13:46:25 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:04:34.557 13:46:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:37.842 13:46:28 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:04:37.843 13:46:28 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:04:37.843 13:46:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:37.843 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:04:37.843 13:46:28 -- json_config/json_config.sh@48 -- # local ret=0 00:04:37.843 13:46:28 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:37.843 13:46:28 -- json_config/json_config.sh@49 -- # local enabled_types 00:04:37.843 13:46:28 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:37.843 13:46:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:37.843 13:46:28 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:37.843 13:46:28 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:37.843 13:46:28 -- json_config/json_config.sh@51 -- # local get_types 00:04:37.843 13:46:28 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:37.843 13:46:28 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:04:37.843 13:46:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:37.843 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:04:37.843 13:46:28 -- json_config/json_config.sh@58 -- # return 0 00:04:37.843 13:46:28 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:04:37.843 13:46:28 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:04:37.843 13:46:28 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:04:37.843 13:46:28 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:04:37.843 13:46:28 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:04:37.843 13:46:28 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:04:37.843 13:46:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:37.843 13:46:28 -- common/autotest_common.sh@10 -- # set +x 00:04:37.843 13:46:28 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:37.843 13:46:28 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:04:37.843 13:46:28 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:04:37.843 13:46:28 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.843 13:46:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:37.843 MallocForNvmf0 00:04:37.843 13:46:28 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:37.843 13:46:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:38.101 MallocForNvmf1 00:04:38.101 13:46:28 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:38.101 13:46:28 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:38.101 [2024-07-23 13:46:29.040614] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:38.101 13:46:29 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:38.101 13:46:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:38.400 13:46:29 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.400 13:46:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:38.665 13:46:29 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.665 13:46:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:38.665 13:46:29 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.665 13:46:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:38.924 [2024-07-23 13:46:29.710729] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
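create_nvmf_subsystem_config above assembles the entire NVMe-oF target over RPC; every command below appears verbatim in the trace, only gathered in one place (S abbreviates the socket flag):

  S='-s /var/tmp/spdk_tgt.sock'
  scripts/rpc.py $S bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py $S bdev_malloc_create 4 1024 --name MallocForNvmf1
  scripts/rpc.py $S nvmf_create_transport -t tcp -u 8192 -c 0
  scripts/rpc.py $S nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py $S nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  scripts/rpc.py $S nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  scripts/rpc.py $S nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420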
00:04:38.924 13:46:29 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:04:38.924 13:46:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:38.924 13:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:38.924 13:46:29 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:04:38.924 13:46:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:38.924 13:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:38.924 13:46:29 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:04:38.924 13:46:29 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:38.924 13:46:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:39.182 MallocBdevForConfigChangeCheck 00:04:39.182 13:46:29 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:04:39.182 13:46:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:39.182 13:46:29 -- common/autotest_common.sh@10 -- # set +x 00:04:39.182 13:46:29 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:04:39.182 13:46:29 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.441 13:46:30 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:39.441 INFO: shutting down applications... 00:04:39.441 13:46:30 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:39.441 13:46:30 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:39.441 13:46:30 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:39.441 13:46:30 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:41.343 Calling clear_iscsi_subsystem 00:04:41.343 Calling clear_nvmf_subsystem 00:04:41.343 Calling clear_nbd_subsystem 00:04:41.343 Calling clear_ublk_subsystem 00:04:41.343 Calling clear_vhost_blk_subsystem 00:04:41.343 Calling clear_vhost_scsi_subsystem 00:04:41.343 Calling clear_scheduler_subsystem 00:04:41.343 Calling clear_bdev_subsystem 00:04:41.343 Calling clear_accel_subsystem 00:04:41.343 Calling clear_vmd_subsystem 00:04:41.343 Calling clear_sock_subsystem 00:04:41.343 Calling clear_iobuf_subsystem 00:04:41.343 13:46:31 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:41.343 13:46:31 -- json_config/json_config.sh@396 -- # count=100 00:04:41.343 13:46:31 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:41.343 13:46:31 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:41.343 13:46:31 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:41.343 13:46:31 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:41.343 13:46:32 -- json_config/json_config.sh@398 -- # break 00:04:41.343 13:46:32 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:41.343 13:46:32 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:04:41.343 13:46:32 -- json_config/json_config.sh@120 -- # local app=target 00:04:41.343 13:46:32 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:41.343 13:46:32 -- json_config/json_config.sh@124 -- # [[ -n 3076855 ]] 00:04:41.343 13:46:32 -- json_config/json_config.sh@127 -- # kill -SIGINT 3076855 00:04:41.343 13:46:32 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:41.343 13:46:32 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:41.343 13:46:32 -- json_config/json_config.sh@130 -- # kill -0 3076855 00:04:41.343 13:46:32 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:41.910 13:46:32 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:41.910 13:46:32 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:41.910 13:46:32 -- json_config/json_config.sh@130 -- # kill -0 3076855 00:04:41.910 13:46:32 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:41.910 13:46:32 -- json_config/json_config.sh@132 -- # break 00:04:41.910 13:46:32 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:41.910 13:46:32 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:41.910 SPDK target shutdown done 00:04:41.910 13:46:32 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:41.910 INFO: relaunching applications... 00:04:41.910 13:46:32 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.910 13:46:32 -- json_config/json_config.sh@98 -- # local app=target 00:04:41.910 13:46:32 -- json_config/json_config.sh@99 -- # shift 00:04:41.910 13:46:32 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:41.910 13:46:32 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:41.911 13:46:32 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:41.911 13:46:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:41.911 13:46:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:41.911 13:46:32 -- json_config/json_config.sh@111 -- # app_pid[$app]=3078392 00:04:41.911 13:46:32 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:41.911 Waiting for target to run... 00:04:41.911 13:46:32 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:41.911 13:46:32 -- json_config/json_config.sh@114 -- # waitforlisten 3078392 /var/tmp/spdk_tgt.sock 00:04:41.911 13:46:32 -- common/autotest_common.sh@819 -- # '[' -z 3078392 ']' 00:04:41.911 13:46:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:41.911 13:46:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:41.911 13:46:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:41.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:41.911 13:46:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:41.911 13:46:32 -- common/autotest_common.sh@10 -- # set +x 00:04:41.911 [2024-07-23 13:46:32.719850] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
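The shutdown/relaunch just traced is the heart of the persistence check: SIGINT the target, poll up to 15 seconds (30 tries at 0.5 s each) for the pid to vanish, then restart from the saved JSON snapshot instead of --wait-for-rpc. In sketch form, assuming $pid holds the target's pid:

  kill -SIGINT "$pid"
  for ((i = 0; i < 30; i++)); do
      kill -0 "$pid" 2>/dev/null || break    # process gone? stop waiting
      sleep 0.5
  done
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json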
00:04:41.911 [2024-07-23 13:46:32.719899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3078392 ] 00:04:41.911 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.169 [2024-07-23 13:46:33.148163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.428 [2024-07-23 13:46:33.235839] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.428 [2024-07-23 13:46:33.235936] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.716 [2024-07-23 13:46:36.235010] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.716 [2024-07-23 13:46:36.267324] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:45.974 13:46:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:45.974 13:46:36 -- common/autotest_common.sh@852 -- # return 0 00:04:45.974 13:46:36 -- json_config/json_config.sh@115 -- # echo '' 00:04:45.974 00:04:45.974 13:46:36 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:45.974 13:46:36 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:45.974 INFO: Checking if target configuration is the same... 00:04:45.974 13:46:36 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.974 13:46:36 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:45.974 13:46:36 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.974 + '[' 2 -ne 2 ']' 00:04:45.974 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:45.974 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:45.974 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:45.974 +++ basename /dev/fd/62 00:04:45.974 ++ mktemp /tmp/62.XXX 00:04:45.974 + tmp_file_1=/tmp/62.b0O 00:04:45.974 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.974 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:45.974 + tmp_file_2=/tmp/spdk_tgt_config.json.1ht 00:04:45.974 + ret=0 00:04:45.974 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.232 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.232 + diff -u /tmp/62.b0O /tmp/spdk_tgt_config.json.1ht 00:04:46.232 + echo 'INFO: JSON config files are the same' 00:04:46.232 INFO: JSON config files are the same 00:04:46.232 + rm /tmp/62.b0O /tmp/spdk_tgt_config.json.1ht 00:04:46.232 + exit 0 00:04:46.232 13:46:37 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:46.232 13:46:37 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:46.232 INFO: changing configuration and checking if this can be detected... 
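'INFO: JSON config files are the same' above is the product of a sorted diff: the live configuration is captured with save_config, both it and the on-disk snapshot are normalized by config_filter.py -method sort, and diff -u must come back empty. A sketch under the assumption, consistent with the xtrace showing no file arguments, that config_filter.py filters stdin to stdout; temp-file names are illustrative:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json
  test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/expected.sorted
  test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
  diff -u /tmp/expected.sorted /tmp/live.sorted && echo 'INFO: JSON config files are the same'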
00:04:46.232 13:46:37 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:46.232 13:46:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:46.491 13:46:37 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.491 13:46:37 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:46.491 13:46:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:46.491 + '[' 2 -ne 2 ']' 00:04:46.491 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:46.491 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:46.491 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:46.491 +++ basename /dev/fd/62 00:04:46.491 ++ mktemp /tmp/62.XXX 00:04:46.491 + tmp_file_1=/tmp/62.xqk 00:04:46.491 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:46.491 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:46.491 + tmp_file_2=/tmp/spdk_tgt_config.json.upV 00:04:46.491 + ret=0 00:04:46.491 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.751 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:46.751 + diff -u /tmp/62.xqk /tmp/spdk_tgt_config.json.upV 00:04:46.751 + ret=1 00:04:46.751 + echo '=== Start of file: /tmp/62.xqk ===' 00:04:46.751 + cat /tmp/62.xqk 00:04:46.751 + echo '=== End of file: /tmp/62.xqk ===' 00:04:46.751 + echo '' 00:04:46.751 + echo '=== Start of file: /tmp/spdk_tgt_config.json.upV ===' 00:04:46.751 + cat /tmp/spdk_tgt_config.json.upV 00:04:46.751 + echo '=== End of file: /tmp/spdk_tgt_config.json.upV ===' 00:04:46.751 + echo '' 00:04:46.751 + rm /tmp/62.xqk /tmp/spdk_tgt_config.json.upV 00:04:46.751 + exit 1 00:04:46.751 13:46:37 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:46.751 INFO: configuration change detected. 
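The change-detection pass is the inverse of the previous check: deleting the sentinel bdev MallocBdevForConfigChangeCheck must make the same sorted diff non-empty, which is why ret flips to 1 above. Reusing the illustrative files from the previous sketch:

  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | test/json_config/config_filter.py -method sort > /tmp/live.sorted
  diff -u /tmp/expected.sorted /tmp/live.sorted || echo 'INFO: configuration change detected.'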
00:04:46.751 13:46:37 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:46.751 13:46:37 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:46.751 13:46:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:46.751 13:46:37 -- common/autotest_common.sh@10 -- # set +x 00:04:46.751 13:46:37 -- json_config/json_config.sh@360 -- # local ret=0 00:04:46.751 13:46:37 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:46.751 13:46:37 -- json_config/json_config.sh@370 -- # [[ -n 3078392 ]] 00:04:46.751 13:46:37 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:46.751 13:46:37 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:46.751 13:46:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:46.751 13:46:37 -- common/autotest_common.sh@10 -- # set +x 00:04:46.751 13:46:37 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:46.751 13:46:37 -- json_config/json_config.sh@246 -- # uname -s 00:04:46.751 13:46:37 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:46.751 13:46:37 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:46.751 13:46:37 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:46.751 13:46:37 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:46.751 13:46:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:46.751 13:46:37 -- common/autotest_common.sh@10 -- # set +x 00:04:46.751 13:46:37 -- json_config/json_config.sh@376 -- # killprocess 3078392 00:04:46.751 13:46:37 -- common/autotest_common.sh@926 -- # '[' -z 3078392 ']' 00:04:46.751 13:46:37 -- common/autotest_common.sh@930 -- # kill -0 3078392 00:04:46.751 13:46:37 -- common/autotest_common.sh@931 -- # uname 00:04:46.751 13:46:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:46.751 13:46:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3078392 00:04:46.751 13:46:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:46.751 13:46:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:46.751 13:46:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3078392' 00:04:46.751 killing process with pid 3078392 00:04:46.751 13:46:37 -- common/autotest_common.sh@945 -- # kill 3078392 00:04:46.751 13:46:37 -- common/autotest_common.sh@950 -- # wait 3078392 00:04:48.660 13:46:39 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:48.660 13:46:39 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:48.660 13:46:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:48.660 13:46:39 -- common/autotest_common.sh@10 -- # set +x 00:04:48.660 13:46:39 -- json_config/json_config.sh@381 -- # return 0 00:04:48.660 13:46:39 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:48.660 INFO: Success 00:04:48.660 00:04:48.660 real 0m14.952s 00:04:48.660 user 0m16.035s 00:04:48.660 sys 0m1.886s 00:04:48.660 13:46:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.660 13:46:39 -- common/autotest_common.sh@10 -- # set +x 00:04:48.660 ************************************ 00:04:48.660 END TEST json_config 00:04:48.660 ************************************ 00:04:48.660 13:46:39 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:48.660 13:46:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:48.660 13:46:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:48.660 13:46:39 -- common/autotest_common.sh@10 -- # set +x 00:04:48.660 ************************************ 00:04:48.660 START TEST json_config_extra_key 00:04:48.660 ************************************ 00:04:48.660 13:46:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:48.660 13:46:39 -- nvmf/common.sh@7 -- # uname -s 00:04:48.660 13:46:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:48.660 13:46:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:48.660 13:46:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:48.660 13:46:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:48.660 13:46:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:48.660 13:46:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:48.660 13:46:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:48.660 13:46:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:48.660 13:46:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:48.660 13:46:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:48.660 13:46:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:48.660 13:46:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:48.660 13:46:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:48.660 13:46:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:48.660 13:46:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:48.660 13:46:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:48.660 13:46:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:48.660 13:46:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:48.660 13:46:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:48.660 13:46:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.660 13:46:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.660 13:46:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.660 13:46:39 -- paths/export.sh@5 -- # export PATH 00:04:48.660 13:46:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:48.660 13:46:39 -- nvmf/common.sh@46 -- # : 0 00:04:48.660 13:46:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:48.660 13:46:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:48.660 13:46:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:48.660 13:46:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:48.660 13:46:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:48.660 13:46:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:48.660 13:46:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:48.660 13:46:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:48.660 INFO: launching applications... 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=3079674 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:48.660 Waiting for target to run... 
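The 'Waiting for target to run...' message marks a poll loop: the harness has forked spdk_tgt with the extra_key.json config and now probes its RPC UNIX socket until it answers or the process dies. A rough standalone equivalent; the retry budget and sleep interval here are illustrative choices, not the exact values in autotest_common.sh:

    #!/usr/bin/env bash
    # Wait until an SPDK app's RPC socket is ready to serve requests.
    set -euo pipefail

    app_pid=$1
    sock=${2:-/var/tmp/spdk_tgt.sock}

    for ((i = 0; i < 100; i++)); do
        # kill -0 probes liveness without delivering a signal
        kill -0 "$app_pid" 2>/dev/null || { echo 'app exited before listening' >&2; exit 1; }
        # rpc_get_methods succeeds only once the socket is up and serving
        if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            echo "ready on $sock"
            exit 0
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    exit 1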
00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 3079674 /var/tmp/spdk_tgt.sock 00:04:48.660 13:46:39 -- common/autotest_common.sh@819 -- # '[' -z 3079674 ']' 00:04:48.660 13:46:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:48.660 13:46:39 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:48.660 13:46:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:48.660 13:46:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:48.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:48.660 13:46:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:48.660 13:46:39 -- common/autotest_common.sh@10 -- # set +x 00:04:48.660 [2024-07-23 13:46:39.505441] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:48.660 [2024-07-23 13:46:39.505495] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079674 ] 00:04:48.660 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.920 [2024-07-23 13:46:39.925498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.179 [2024-07-23 13:46:40.015413] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:49.179 [2024-07-23 13:46:40.015521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.439 13:46:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:49.439 13:46:40 -- common/autotest_common.sh@852 -- # return 0 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:49.439 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:49.439 INFO: shutting down applications... 
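The shutdown traced next reduces to: send SIGINT so the target runs its orderly exit path, then poll kill -0 until the pid disappears, with the i < 30 / sleep 0.5 loop visible below bounding the wait at roughly 15 seconds. As a reusable sketch (the SIGKILL fallback at the end is an illustrative addition; the harness instead lets the test fail):

    graceful_stop() {
        local pid=$1
        # SIGINT asks an SPDK app to run its clean shutdown path.
        kill -SIGINT "$pid" 2>/dev/null || return 0   # already gone
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0    # exited cleanly
            sleep 0.5
        done
        echo "pid $pid ignored SIGINT; sending SIGKILL" >&2
        kill -9 "$pid"
    }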
00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 3079674 ]] 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 3079674 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3079674 00:04:49.439 13:46:40 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:50.010 13:46:40 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:50.010 13:46:40 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:50.010 13:46:40 -- json_config/json_config_extra_key.sh@50 -- # kill -0 3079674 00:04:50.010 13:46:40 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:50.010 13:46:40 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:50.010 13:46:40 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:50.010 13:46:40 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:50.010 SPDK target shutdown done 00:04:50.010 13:46:40 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:50.010 Success 00:04:50.010 00:04:50.010 real 0m1.430s 00:04:50.010 user 0m1.068s 00:04:50.010 sys 0m0.532s 00:04:50.010 13:46:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.010 13:46:40 -- common/autotest_common.sh@10 -- # set +x 00:04:50.010 ************************************ 00:04:50.010 END TEST json_config_extra_key 00:04:50.010 ************************************ 00:04:50.010 13:46:40 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.010 13:46:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.010 13:46:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.010 13:46:40 -- common/autotest_common.sh@10 -- # set +x 00:04:50.010 ************************************ 00:04:50.010 START TEST alias_rpc 00:04:50.010 ************************************ 00:04:50.010 13:46:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:50.010 * Looking for test storage... 00:04:50.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:50.010 13:46:40 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:50.010 13:46:40 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3079965 00:04:50.010 13:46:40 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:50.010 13:46:40 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3079965 00:04:50.010 13:46:40 -- common/autotest_common.sh@819 -- # '[' -z 3079965 ']' 00:04:50.010 13:46:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.010 13:46:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.010 13:46:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:50.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.010 13:46:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.010 13:46:40 -- common/autotest_common.sh@10 -- # set +x 00:04:50.010 [2024-07-23 13:46:40.962604] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:50.010 [2024-07-23 13:46:40.962657] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3079965 ] 00:04:50.010 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.010 [2024-07-23 13:46:41.016954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.269 [2024-07-23 13:46:41.087888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.269 [2024-07-23 13:46:41.088009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.837 13:46:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:50.837 13:46:41 -- common/autotest_common.sh@852 -- # return 0 00:04:50.837 13:46:41 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:51.096 13:46:41 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3079965 00:04:51.096 13:46:41 -- common/autotest_common.sh@926 -- # '[' -z 3079965 ']' 00:04:51.096 13:46:41 -- common/autotest_common.sh@930 -- # kill -0 3079965 00:04:51.096 13:46:41 -- common/autotest_common.sh@931 -- # uname 00:04:51.096 13:46:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:51.096 13:46:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3079965 00:04:51.096 13:46:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:51.096 13:46:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:51.096 13:46:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3079965' 00:04:51.096 killing process with pid 3079965 00:04:51.096 13:46:41 -- common/autotest_common.sh@945 -- # kill 3079965 00:04:51.096 13:46:41 -- common/autotest_common.sh@950 -- # wait 3079965 00:04:51.355 00:04:51.355 real 0m1.494s 00:04:51.355 user 0m1.634s 00:04:51.355 sys 0m0.386s 00:04:51.355 13:46:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.355 13:46:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.355 ************************************ 00:04:51.355 END TEST alias_rpc 00:04:51.355 ************************************ 00:04:51.355 13:46:42 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:51.355 13:46:42 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:51.355 13:46:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:51.355 13:46:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:51.355 13:46:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.355 ************************************ 00:04:51.355 START TEST spdkcli_tcp 00:04:51.355 ************************************ 00:04:51.355 13:46:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:51.615 * Looking for test storage... 
00:04:51.615 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:51.615 13:46:42 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:51.615 13:46:42 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:51.615 13:46:42 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:51.615 13:46:42 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:51.615 13:46:42 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:51.615 13:46:42 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:51.615 13:46:42 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:51.615 13:46:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:51.615 13:46:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.615 13:46:42 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:51.615 13:46:42 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3080248 00:04:51.615 13:46:42 -- spdkcli/tcp.sh@27 -- # waitforlisten 3080248 00:04:51.615 13:46:42 -- common/autotest_common.sh@819 -- # '[' -z 3080248 ']' 00:04:51.615 13:46:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.615 13:46:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.615 13:46:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.615 13:46:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.615 13:46:42 -- common/autotest_common.sh@10 -- # set +x 00:04:51.615 [2024-07-23 13:46:42.500069] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:51.615 [2024-07-23 13:46:42.500117] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080248 ] 00:04:51.615 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.615 [2024-07-23 13:46:42.553608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.615 [2024-07-23 13:46:42.631153] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:51.615 [2024-07-23 13:46:42.631338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.615 [2024-07-23 13:46:42.631342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.556 13:46:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:52.556 13:46:43 -- common/autotest_common.sh@852 -- # return 0 00:04:52.556 13:46:43 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:52.556 13:46:43 -- spdkcli/tcp.sh@31 -- # socat_pid=3080484 00:04:52.556 13:46:43 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:52.556 [ 00:04:52.556 "bdev_malloc_delete", 00:04:52.556 "bdev_malloc_create", 00:04:52.556 "bdev_null_resize", 00:04:52.556 "bdev_null_delete", 00:04:52.556 "bdev_null_create", 00:04:52.556 "bdev_nvme_cuse_unregister", 00:04:52.556 "bdev_nvme_cuse_register", 00:04:52.556 "bdev_opal_new_user", 00:04:52.556 "bdev_opal_set_lock_state", 00:04:52.556 "bdev_opal_delete", 00:04:52.556 "bdev_opal_get_info", 00:04:52.556 "bdev_opal_create", 00:04:52.556 "bdev_nvme_opal_revert", 00:04:52.556 "bdev_nvme_opal_init", 00:04:52.556 "bdev_nvme_send_cmd", 00:04:52.556 "bdev_nvme_get_path_iostat", 00:04:52.556 "bdev_nvme_get_mdns_discovery_info", 00:04:52.556 "bdev_nvme_stop_mdns_discovery", 00:04:52.556 "bdev_nvme_start_mdns_discovery", 00:04:52.556 "bdev_nvme_set_multipath_policy", 00:04:52.556 "bdev_nvme_set_preferred_path", 00:04:52.556 "bdev_nvme_get_io_paths", 00:04:52.556 "bdev_nvme_remove_error_injection", 00:04:52.556 "bdev_nvme_add_error_injection", 00:04:52.556 "bdev_nvme_get_discovery_info", 00:04:52.556 "bdev_nvme_stop_discovery", 00:04:52.556 "bdev_nvme_start_discovery", 00:04:52.556 "bdev_nvme_get_controller_health_info", 00:04:52.556 "bdev_nvme_disable_controller", 00:04:52.556 "bdev_nvme_enable_controller", 00:04:52.556 "bdev_nvme_reset_controller", 00:04:52.556 "bdev_nvme_get_transport_statistics", 00:04:52.556 "bdev_nvme_apply_firmware", 00:04:52.556 "bdev_nvme_detach_controller", 00:04:52.556 "bdev_nvme_get_controllers", 00:04:52.556 "bdev_nvme_attach_controller", 00:04:52.556 "bdev_nvme_set_hotplug", 00:04:52.556 "bdev_nvme_set_options", 00:04:52.556 "bdev_passthru_delete", 00:04:52.556 "bdev_passthru_create", 00:04:52.556 "bdev_lvol_grow_lvstore", 00:04:52.556 "bdev_lvol_get_lvols", 00:04:52.556 "bdev_lvol_get_lvstores", 00:04:52.556 "bdev_lvol_delete", 00:04:52.556 "bdev_lvol_set_read_only", 00:04:52.556 "bdev_lvol_resize", 00:04:52.556 "bdev_lvol_decouple_parent", 00:04:52.556 "bdev_lvol_inflate", 00:04:52.556 "bdev_lvol_rename", 00:04:52.556 "bdev_lvol_clone_bdev", 00:04:52.556 "bdev_lvol_clone", 00:04:52.556 "bdev_lvol_snapshot", 00:04:52.556 "bdev_lvol_create", 00:04:52.556 "bdev_lvol_delete_lvstore", 00:04:52.556 "bdev_lvol_rename_lvstore", 00:04:52.556 "bdev_lvol_create_lvstore", 00:04:52.556 "bdev_raid_set_options", 00:04:52.556 
"bdev_raid_remove_base_bdev", 00:04:52.556 "bdev_raid_add_base_bdev", 00:04:52.556 "bdev_raid_delete", 00:04:52.556 "bdev_raid_create", 00:04:52.556 "bdev_raid_get_bdevs", 00:04:52.556 "bdev_error_inject_error", 00:04:52.556 "bdev_error_delete", 00:04:52.556 "bdev_error_create", 00:04:52.556 "bdev_split_delete", 00:04:52.556 "bdev_split_create", 00:04:52.556 "bdev_delay_delete", 00:04:52.556 "bdev_delay_create", 00:04:52.556 "bdev_delay_update_latency", 00:04:52.556 "bdev_zone_block_delete", 00:04:52.556 "bdev_zone_block_create", 00:04:52.556 "blobfs_create", 00:04:52.556 "blobfs_detect", 00:04:52.556 "blobfs_set_cache_size", 00:04:52.556 "bdev_aio_delete", 00:04:52.556 "bdev_aio_rescan", 00:04:52.556 "bdev_aio_create", 00:04:52.556 "bdev_ftl_set_property", 00:04:52.556 "bdev_ftl_get_properties", 00:04:52.556 "bdev_ftl_get_stats", 00:04:52.556 "bdev_ftl_unmap", 00:04:52.556 "bdev_ftl_unload", 00:04:52.556 "bdev_ftl_delete", 00:04:52.556 "bdev_ftl_load", 00:04:52.556 "bdev_ftl_create", 00:04:52.556 "bdev_virtio_attach_controller", 00:04:52.556 "bdev_virtio_scsi_get_devices", 00:04:52.556 "bdev_virtio_detach_controller", 00:04:52.556 "bdev_virtio_blk_set_hotplug", 00:04:52.556 "bdev_iscsi_delete", 00:04:52.556 "bdev_iscsi_create", 00:04:52.556 "bdev_iscsi_set_options", 00:04:52.556 "accel_error_inject_error", 00:04:52.556 "ioat_scan_accel_module", 00:04:52.556 "dsa_scan_accel_module", 00:04:52.556 "iaa_scan_accel_module", 00:04:52.556 "iscsi_set_options", 00:04:52.556 "iscsi_get_auth_groups", 00:04:52.556 "iscsi_auth_group_remove_secret", 00:04:52.556 "iscsi_auth_group_add_secret", 00:04:52.556 "iscsi_delete_auth_group", 00:04:52.556 "iscsi_create_auth_group", 00:04:52.556 "iscsi_set_discovery_auth", 00:04:52.556 "iscsi_get_options", 00:04:52.556 "iscsi_target_node_request_logout", 00:04:52.556 "iscsi_target_node_set_redirect", 00:04:52.556 "iscsi_target_node_set_auth", 00:04:52.556 "iscsi_target_node_add_lun", 00:04:52.556 "iscsi_get_connections", 00:04:52.556 "iscsi_portal_group_set_auth", 00:04:52.556 "iscsi_start_portal_group", 00:04:52.556 "iscsi_delete_portal_group", 00:04:52.556 "iscsi_create_portal_group", 00:04:52.556 "iscsi_get_portal_groups", 00:04:52.556 "iscsi_delete_target_node", 00:04:52.556 "iscsi_target_node_remove_pg_ig_maps", 00:04:52.556 "iscsi_target_node_add_pg_ig_maps", 00:04:52.556 "iscsi_create_target_node", 00:04:52.556 "iscsi_get_target_nodes", 00:04:52.556 "iscsi_delete_initiator_group", 00:04:52.556 "iscsi_initiator_group_remove_initiators", 00:04:52.556 "iscsi_initiator_group_add_initiators", 00:04:52.556 "iscsi_create_initiator_group", 00:04:52.556 "iscsi_get_initiator_groups", 00:04:52.556 "nvmf_set_crdt", 00:04:52.556 "nvmf_set_config", 00:04:52.556 "nvmf_set_max_subsystems", 00:04:52.556 "nvmf_subsystem_get_listeners", 00:04:52.556 "nvmf_subsystem_get_qpairs", 00:04:52.557 "nvmf_subsystem_get_controllers", 00:04:52.557 "nvmf_get_stats", 00:04:52.557 "nvmf_get_transports", 00:04:52.557 "nvmf_create_transport", 00:04:52.557 "nvmf_get_targets", 00:04:52.557 "nvmf_delete_target", 00:04:52.557 "nvmf_create_target", 00:04:52.557 "nvmf_subsystem_allow_any_host", 00:04:52.557 "nvmf_subsystem_remove_host", 00:04:52.557 "nvmf_subsystem_add_host", 00:04:52.557 "nvmf_subsystem_remove_ns", 00:04:52.557 "nvmf_subsystem_add_ns", 00:04:52.557 "nvmf_subsystem_listener_set_ana_state", 00:04:52.557 "nvmf_discovery_get_referrals", 00:04:52.557 "nvmf_discovery_remove_referral", 00:04:52.557 "nvmf_discovery_add_referral", 00:04:52.557 "nvmf_subsystem_remove_listener", 
00:04:52.557 "nvmf_subsystem_add_listener", 00:04:52.557 "nvmf_delete_subsystem", 00:04:52.557 "nvmf_create_subsystem", 00:04:52.557 "nvmf_get_subsystems", 00:04:52.557 "env_dpdk_get_mem_stats", 00:04:52.557 "nbd_get_disks", 00:04:52.557 "nbd_stop_disk", 00:04:52.557 "nbd_start_disk", 00:04:52.557 "ublk_recover_disk", 00:04:52.557 "ublk_get_disks", 00:04:52.557 "ublk_stop_disk", 00:04:52.557 "ublk_start_disk", 00:04:52.557 "ublk_destroy_target", 00:04:52.557 "ublk_create_target", 00:04:52.557 "virtio_blk_create_transport", 00:04:52.557 "virtio_blk_get_transports", 00:04:52.557 "vhost_controller_set_coalescing", 00:04:52.557 "vhost_get_controllers", 00:04:52.557 "vhost_delete_controller", 00:04:52.557 "vhost_create_blk_controller", 00:04:52.557 "vhost_scsi_controller_remove_target", 00:04:52.557 "vhost_scsi_controller_add_target", 00:04:52.557 "vhost_start_scsi_controller", 00:04:52.557 "vhost_create_scsi_controller", 00:04:52.557 "thread_set_cpumask", 00:04:52.557 "framework_get_scheduler", 00:04:52.557 "framework_set_scheduler", 00:04:52.557 "framework_get_reactors", 00:04:52.557 "thread_get_io_channels", 00:04:52.557 "thread_get_pollers", 00:04:52.557 "thread_get_stats", 00:04:52.557 "framework_monitor_context_switch", 00:04:52.557 "spdk_kill_instance", 00:04:52.557 "log_enable_timestamps", 00:04:52.557 "log_get_flags", 00:04:52.557 "log_clear_flag", 00:04:52.557 "log_set_flag", 00:04:52.557 "log_get_level", 00:04:52.557 "log_set_level", 00:04:52.557 "log_get_print_level", 00:04:52.557 "log_set_print_level", 00:04:52.557 "framework_enable_cpumask_locks", 00:04:52.557 "framework_disable_cpumask_locks", 00:04:52.557 "framework_wait_init", 00:04:52.557 "framework_start_init", 00:04:52.557 "scsi_get_devices", 00:04:52.557 "bdev_get_histogram", 00:04:52.557 "bdev_enable_histogram", 00:04:52.557 "bdev_set_qos_limit", 00:04:52.557 "bdev_set_qd_sampling_period", 00:04:52.557 "bdev_get_bdevs", 00:04:52.557 "bdev_reset_iostat", 00:04:52.557 "bdev_get_iostat", 00:04:52.557 "bdev_examine", 00:04:52.557 "bdev_wait_for_examine", 00:04:52.557 "bdev_set_options", 00:04:52.557 "notify_get_notifications", 00:04:52.557 "notify_get_types", 00:04:52.557 "accel_get_stats", 00:04:52.557 "accel_set_options", 00:04:52.557 "accel_set_driver", 00:04:52.557 "accel_crypto_key_destroy", 00:04:52.557 "accel_crypto_keys_get", 00:04:52.557 "accel_crypto_key_create", 00:04:52.557 "accel_assign_opc", 00:04:52.557 "accel_get_module_info", 00:04:52.557 "accel_get_opc_assignments", 00:04:52.557 "vmd_rescan", 00:04:52.557 "vmd_remove_device", 00:04:52.557 "vmd_enable", 00:04:52.557 "sock_set_default_impl", 00:04:52.557 "sock_impl_set_options", 00:04:52.557 "sock_impl_get_options", 00:04:52.557 "iobuf_get_stats", 00:04:52.557 "iobuf_set_options", 00:04:52.557 "framework_get_pci_devices", 00:04:52.557 "framework_get_config", 00:04:52.557 "framework_get_subsystems", 00:04:52.557 "trace_get_info", 00:04:52.557 "trace_get_tpoint_group_mask", 00:04:52.557 "trace_disable_tpoint_group", 00:04:52.557 "trace_enable_tpoint_group", 00:04:52.557 "trace_clear_tpoint_mask", 00:04:52.557 "trace_set_tpoint_mask", 00:04:52.557 "spdk_get_version", 00:04:52.557 "rpc_get_methods" 00:04:52.557 ] 00:04:52.557 13:46:43 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:52.557 13:46:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:52.557 13:46:43 -- common/autotest_common.sh@10 -- # set +x 00:04:52.557 13:46:43 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:52.557 13:46:43 -- spdkcli/tcp.sh@38 -- # killprocess 
3080248 00:04:52.557 13:46:43 -- common/autotest_common.sh@926 -- # '[' -z 3080248 ']' 00:04:52.557 13:46:43 -- common/autotest_common.sh@930 -- # kill -0 3080248 00:04:52.557 13:46:43 -- common/autotest_common.sh@931 -- # uname 00:04:52.557 13:46:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:52.557 13:46:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3080248 00:04:52.557 13:46:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:52.557 13:46:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:52.557 13:46:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3080248' 00:04:52.557 killing process with pid 3080248 00:04:52.557 13:46:43 -- common/autotest_common.sh@945 -- # kill 3080248 00:04:52.557 13:46:43 -- common/autotest_common.sh@950 -- # wait 3080248 00:04:53.126 00:04:53.126 real 0m1.507s 00:04:53.126 user 0m2.800s 00:04:53.126 sys 0m0.406s 00:04:53.126 13:46:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.126 13:46:43 -- common/autotest_common.sh@10 -- # set +x 00:04:53.126 ************************************ 00:04:53.126 END TEST spdkcli_tcp 00:04:53.126 ************************************ 00:04:53.126 13:46:43 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.126 13:46:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.126 13:46:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.126 13:46:43 -- common/autotest_common.sh@10 -- # set +x 00:04:53.126 ************************************ 00:04:53.126 START TEST dpdk_mem_utility 00:04:53.126 ************************************ 00:04:53.126 13:46:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:53.126 * Looking for test storage... 00:04:53.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:53.126 13:46:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:53.126 13:46:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:53.126 13:46:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3080555 00:04:53.126 13:46:43 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3080555 00:04:53.126 13:46:43 -- common/autotest_common.sh@819 -- # '[' -z 3080555 ']' 00:04:53.126 13:46:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.126 13:46:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:53.126 13:46:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.126 13:46:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:53.126 13:46:43 -- common/autotest_common.sh@10 -- # set +x 00:04:53.126 [2024-07-23 13:46:44.022129] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:53.126 [2024-07-23 13:46:44.022178] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080555 ] 00:04:53.126 EAL: No free 2048 kB hugepages reported on node 1 00:04:53.126 [2024-07-23 13:46:44.074577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.386 [2024-07-23 13:46:44.153848] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:53.386 [2024-07-23 13:46:44.153958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.955 13:46:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:53.955 13:46:44 -- common/autotest_common.sh@852 -- # return 0 00:04:53.955 13:46:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:53.955 13:46:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:53.955 13:46:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:53.955 13:46:44 -- common/autotest_common.sh@10 -- # set +x 00:04:53.955 { 00:04:53.955 "filename": "/tmp/spdk_mem_dump.txt" 00:04:53.955 } 00:04:53.955 13:46:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:53.955 13:46:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:53.955 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:53.955 1 heaps totaling size 814.000000 MiB 00:04:53.955 size: 814.000000 MiB heap id: 0 00:04:53.955 end heaps---------- 00:04:53.955 8 mempools totaling size 598.116089 MiB 00:04:53.955 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:53.955 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:53.955 size: 84.521057 MiB name: bdev_io_3080555 00:04:53.955 size: 51.011292 MiB name: evtpool_3080555 00:04:53.955 size: 50.003479 MiB name: msgpool_3080555 00:04:53.955 size: 21.763794 MiB name: PDU_Pool 00:04:53.955 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:53.955 size: 0.026123 MiB name: Session_Pool 00:04:53.955 end mempools------- 00:04:53.955 6 memzones totaling size 4.142822 MiB 00:04:53.955 size: 1.000366 MiB name: RG_ring_0_3080555 00:04:53.955 size: 1.000366 MiB name: RG_ring_1_3080555 00:04:53.955 size: 1.000366 MiB name: RG_ring_4_3080555 00:04:53.955 size: 1.000366 MiB name: RG_ring_5_3080555 00:04:53.955 size: 0.125366 MiB name: RG_ring_2_3080555 00:04:53.955 size: 0.015991 MiB name: RG_ring_3_3080555 00:04:53.955 end memzones------- 00:04:53.955 13:46:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:53.955 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:53.955 list of free elements. 
size: 12.519348 MiB 00:04:53.955 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:53.955 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:53.955 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:53.955 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:53.955 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:53.955 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:53.955 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:53.955 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:53.955 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:53.955 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:53.955 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:53.955 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:53.955 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:53.955 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:53.955 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:53.955 list of standard malloc elements. size: 199.218079 MiB 00:04:53.955 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:53.955 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:53.955 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:53.955 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:53.955 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:53.955 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:53.955 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:53.955 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:53.955 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:53.955 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:53.955 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:53.955 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:53.955 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:53.955 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:53.955 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:53.955 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:53.955 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:53.955 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:53.955 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:53.955 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:53.955 list of memzone associated elements. size: 602.262573 MiB 00:04:53.955 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:53.955 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:53.955 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:53.956 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:53.956 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:53.956 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3080555_0 00:04:53.956 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:53.956 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3080555_0 00:04:53.956 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:53.956 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3080555_0 00:04:53.956 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:53.956 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:53.956 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:53.956 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:53.956 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:53.956 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3080555 00:04:53.956 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:53.956 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3080555 00:04:53.956 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:53.956 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3080555 00:04:53.956 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:53.956 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:53.956 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:53.956 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:53.956 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:53.956 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:53.956 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:53.956 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:53.956 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:53.956 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3080555 00:04:53.956 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:53.956 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3080555 00:04:53.956 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:53.956 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3080555 00:04:53.956 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:53.956 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3080555 00:04:53.956 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:53.956 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3080555 00:04:53.956 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:53.956 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:53.956 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:53.956 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:53.956 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:53.956 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:53.956 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:53.956 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3080555 00:04:53.956 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:53.956 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:53.956 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:53.956 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:53.956 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:53.956 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3080555 00:04:53.956 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:53.956 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:53.956 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:53.956 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3080555 00:04:53.956 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:53.956 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3080555 00:04:53.956 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:53.956 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:53.956 13:46:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:53.956 13:46:44 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3080555 00:04:53.956 13:46:44 -- common/autotest_common.sh@926 -- # '[' -z 3080555 ']' 00:04:53.956 13:46:44 -- common/autotest_common.sh@930 -- # kill -0 3080555 00:04:53.956 13:46:44 -- common/autotest_common.sh@931 -- # uname 00:04:53.956 13:46:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:53.956 13:46:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3080555 00:04:53.956 13:46:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:53.956 13:46:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:53.956 13:46:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3080555' 00:04:53.956 killing process with pid 3080555 00:04:53.956 13:46:44 -- common/autotest_common.sh@945 -- # kill 3080555 00:04:53.956 13:46:44 -- common/autotest_common.sh@950 -- # wait 3080555 00:04:54.526 00:04:54.526 real 0m1.379s 00:04:54.526 user 0m1.439s 00:04:54.526 sys 0m0.366s 00:04:54.526 13:46:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.526 13:46:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.526 ************************************ 00:04:54.526 END TEST dpdk_mem_utility 00:04:54.526 ************************************ 00:04:54.526 13:46:45 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:54.526 13:46:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.526 13:46:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.526 13:46:45 -- common/autotest_common.sh@10 -- # set +x 
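Everything the dpdk_mem_utility test above exercised can be reproduced by hand with two scripts from an SPDK checkout: one RPC makes the running target dump raw DPDK memory statistics, and dpdk_mem_info.py formats them. The socket defaults to /var/tmp/spdk.sock; pass -s to rpc.py if the target listens elsewhere:

    # Ask the running target to dump DPDK memory statistics.
    scripts/rpc.py env_dpdk_get_mem_stats    # writes /tmp/spdk_mem_dump.txt
    # Summarize heaps, mempools and memzones from that dump.
    scripts/dpdk_mem_info.py
    # Per-element breakdown of heap 0, as in the trace above.
    scripts/dpdk_mem_info.py -m 0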
00:04:54.526 ************************************ 00:04:54.526 START TEST event 00:04:54.526 ************************************ 00:04:54.526 13:46:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:54.526 * Looking for test storage... 00:04:54.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:54.528 13:46:45 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:54.528 13:46:45 -- bdev/nbd_common.sh@6 -- # set -e 00:04:54.528 13:46:45 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:54.528 13:46:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:54.528 13:46:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.528 13:46:45 -- common/autotest_common.sh@10 -- # set +x 00:04:54.528 ************************************ 00:04:54.528 START TEST event_perf 00:04:54.528 ************************************ 00:04:54.528 13:46:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:54.528 Running I/O for 1 seconds...[2024-07-23 13:46:45.427084] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:54.528 [2024-07-23 13:46:45.427142] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080839 ] 00:04:54.528 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.528 [2024-07-23 13:46:45.480178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:54.787 [2024-07-23 13:46:45.553491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.787 [2024-07-23 13:46:45.553585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.787 [2024-07-23 13:46:45.553676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.787 [2024-07-23 13:46:45.553677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.751 Running I/O for 1 seconds... 00:04:55.751 lcore 0: 202670 00:04:55.751 lcore 1: 202670 00:04:55.751 lcore 2: 202670 00:04:55.751 lcore 3: 202671 00:04:55.751 done. 
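The four near-identical lcore counters are the whole output of event_perf: one reactor per bit set in the core mask, each counting how many no-op events it can dispatch in the allotted time. The run above can be repeated by hand from an SPDK build tree (root and hugepage setup as for any SPDK app):

    # -m 0xF : reactors on cores 0-3, one "lcore N: <count>" line each
    # -t 1   : measure for one second
    sudo test/event/event_perf/event_perf -m 0xF -t 1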
00:04:55.751 00:04:55.751 real 0m1.231s 00:04:55.751 user 0m4.157s 00:04:55.751 sys 0m0.071s 00:04:55.751 13:46:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.751 13:46:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.751 ************************************ 00:04:55.751 END TEST event_perf 00:04:55.751 ************************************ 00:04:55.751 13:46:46 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:55.751 13:46:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:55.751 13:46:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:55.751 13:46:46 -- common/autotest_common.sh@10 -- # set +x 00:04:55.751 ************************************ 00:04:55.752 START TEST event_reactor 00:04:55.752 ************************************ 00:04:55.752 13:46:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:55.752 [2024-07-23 13:46:46.700183] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:04:55.752 [2024-07-23 13:46:46.700257] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081092 ] 00:04:55.752 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.752 [2024-07-23 13:46:46.756630] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.023 [2024-07-23 13:46:46.827516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.956 test_start 00:04:56.956 oneshot 00:04:56.956 tick 100 00:04:56.956 tick 100 00:04:56.956 tick 250 00:04:56.956 tick 100 00:04:56.956 tick 100 00:04:56.956 tick 250 00:04:56.956 tick 100 00:04:56.956 tick 500 00:04:56.956 tick 100 00:04:56.956 tick 100 00:04:56.956 tick 250 00:04:56.956 tick 100 00:04:56.956 tick 100 00:04:56.956 test_end 00:04:56.956 00:04:56.956 real 0m1.236s 00:04:56.956 user 0m1.152s 00:04:56.956 sys 0m0.079s 00:04:56.956 13:46:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.956 13:46:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.956 ************************************ 00:04:56.956 END TEST event_reactor 00:04:56.956 ************************************ 00:04:56.956 13:46:47 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.956 13:46:47 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:56.956 13:46:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.956 13:46:47 -- common/autotest_common.sh@10 -- # set +x 00:04:56.956 ************************************ 00:04:56.956 START TEST event_reactor_perf 00:04:56.956 ************************************ 00:04:56.956 13:46:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:56.956 [2024-07-23 13:46:47.970687] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:56.956 [2024-07-23 13:46:47.970763] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081352 ] 00:04:57.215 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.215 [2024-07-23 13:46:48.028834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.215 [2024-07-23 13:46:48.096589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.591 test_start 00:04:58.591 test_end 00:04:58.591 Performance: 493901 events per second 00:04:58.591 00:04:58.591 real 0m1.235s 00:04:58.591 user 0m1.162s 00:04:58.591 sys 0m0.068s 00:04:58.591 13:46:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.591 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.591 ************************************ 00:04:58.591 END TEST event_reactor_perf 00:04:58.591 ************************************ 00:04:58.591 13:46:49 -- event/event.sh@49 -- # uname -s 00:04:58.591 13:46:49 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:58.591 13:46:49 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:58.591 13:46:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.591 13:46:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.591 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.591 ************************************ 00:04:58.591 START TEST event_scheduler 00:04:58.591 ************************************ 00:04:58.591 13:46:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:58.591 * Looking for test storage... 00:04:58.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:58.591 13:46:49 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:58.591 13:46:49 -- scheduler/scheduler.sh@35 -- # scheduler_pid=3081628 00:04:58.591 13:46:49 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:58.591 13:46:49 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:58.591 13:46:49 -- scheduler/scheduler.sh@37 -- # waitforlisten 3081628 00:04:58.591 13:46:49 -- common/autotest_common.sh@819 -- # '[' -z 3081628 ']' 00:04:58.591 13:46:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.591 13:46:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.591 13:46:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.591 13:46:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.591 13:46:49 -- common/autotest_common.sh@10 -- # set +x 00:04:58.591 [2024-07-23 13:46:49.337835] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:04:58.591 [2024-07-23 13:46:49.337887] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3081628 ] 00:04:58.591 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.592 [2024-07-23 13:46:49.393158] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:58.592 [2024-07-23 13:46:49.466346] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.592 [2024-07-23 13:46:49.466433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.592 [2024-07-23 13:46:49.466529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:58.592 [2024-07-23 13:46:49.466530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.158 13:46:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.158 13:46:50 -- common/autotest_common.sh@852 -- # return 0 00:04:59.158 13:46:50 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:59.158 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.158 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.158 POWER: Env isn't set yet! 00:04:59.158 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:59.158 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:59.158 POWER: Cannot set governor of lcore 0 to userspace 00:04:59.158 POWER: Attempting to initialise PSTAT power management... 00:04:59.158 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:59.158 POWER: Initialized successfully for lcore 0 power management 00:04:59.418 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:59.418 POWER: Initialized successfully for lcore 1 power management 00:04:59.418 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:59.418 POWER: Initialized successfully for lcore 2 power management 00:04:59.418 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:59.418 POWER: Initialized successfully for lcore 3 power management 00:04:59.418 [2024-07-23 13:46:50.193177] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:59.418 [2024-07-23 13:46:50.193195] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:59.418 [2024-07-23 13:46:50.193203] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:59.418 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:59.418 13:46:50 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:59.418 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:59.418 13:46:50 -- common/autotest_common.sh@10 -- # set +x 00:04:59.418 [2024-07-23 13:46:50.266399] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:04:59.418 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.418 13:46:50 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:04:59.418 13:46:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:59.418 13:46:50 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:59.418 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.418 ************************************
00:04:59.418 START TEST scheduler_create_thread
00:04:59.418 ************************************
00:04:59.418 13:46:50 -- common/autotest_common.sh@1104 -- # scheduler_create_thread
00:04:59.418 13:46:50 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:04:59.418 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.418 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.418 2
00:04:59.418 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.418 13:46:50 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:04:59.418 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.418 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.418 3
00:04:59.418 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.418 13:46:50 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:04:59.418 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.418 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.418 4
00:04:59.418 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.418 13:46:50 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:04:59.418 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.418 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.418 5
00:04:59.418 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.418 13:46:50 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:04:59.418 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.418 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.418 6
00:04:59.418 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.418 13:46:50 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:04:59.419 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.419 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.419 7
00:04:59.419 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.419 13:46:50 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:04:59.419 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.419 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.419 8
00:04:59.419 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.419 13:46:50 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:04:59.419 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.419 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.419 9
00:04:59.419 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.419 13:46:50 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:04:59.419 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.419 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.419 10
00:04:59.419 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.419 13:46:50 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:04:59.419 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.419 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:04:59.419 13:46:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:04:59.419 13:46:50 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:04:59.419 13:46:50 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:04:59.419 13:46:50 -- common/autotest_common.sh@551 -- # xtrace_disable
00:04:59.419 13:46:50 -- common/autotest_common.sh@10 -- # set +x
00:05:00.354 13:46:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:00.354 13:46:51 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:00.354 13:46:51 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:00.354 13:46:51 -- common/autotest_common.sh@10 -- # set +x
00:05:01.730 13:46:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:01.730 13:46:52 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:01.730 13:46:52 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:01.730 13:46:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:01.730 13:46:52 -- common/autotest_common.sh@10 -- # set +x
00:05:02.668 13:46:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:02.668
00:05:02.668 real 0m3.380s
00:05:02.668 user 0m0.024s
00:05:02.668 sys 0m0.004s
00:05:02.668 13:46:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:02.668 13:46:53 -- common/autotest_common.sh@10 -- # set +x
00:05:02.668 ************************************
00:05:02.668 END TEST scheduler_create_thread
00:05:02.668 ************************************
00:05:02.927 13:46:53 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:02.927 13:46:53 -- scheduler/scheduler.sh@46 -- # killprocess 3081628
00:05:02.927 13:46:53 -- common/autotest_common.sh@926 -- # '[' -z 3081628 ']'
00:05:02.927 13:46:53 -- common/autotest_common.sh@930 -- # kill -0 3081628
00:05:02.927 13:46:53 -- common/autotest_common.sh@931 -- # uname
00:05:02.927 13:46:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:05:02.927 13:46:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3081628
00:05:02.927 13:46:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:05:02.927 13:46:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:05:02.927 13:46:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3081628'
00:05:02.927 killing process with pid 3081628
00:05:02.927 13:46:53 -- common/autotest_common.sh@945 -- # kill 3081628
00:05:02.927 13:46:53 -- common/autotest_common.sh@950 -- # wait 3081628
00:05:03.185 [2024-07-23 13:46:54.034407] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
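The POWER lines that follow show the app's shutdown path handing each lcore's cpufreq governor back: the test ran under 'performance', and the power library restores whatever governor it saved at startup ('powersave' on this node). A rough sketch of that sysfs round-trip, assuming root and a cpufreq-capable kernel; the core list and paths are illustrative of the mechanism, not the library's exact code:

  for cpu in 0 1 2 3; do
    gov=/sys/devices/system/cpu/cpu$cpu/cpufreq/scaling_governor
    saved=$(cat "$gov")          # remember the governor before the test
    echo performance > "$gov"    # run the workload under 'performance'
    # ... workload runs ...
    echo "$saved" > "$gov"       # put the original governor back on exit
  done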
00:05:03.185 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:05:03.185 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:05:03.186 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:05:03.186 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:05:03.186 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:05:03.186 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:05:03.186 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:05:03.186 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:05:03.445
00:05:03.445 real 0m5.067s
00:05:03.445 user 0m10.477s
00:05:03.445 sys 0m0.330s
00:05:03.445 13:46:54 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:03.445 13:46:54 -- common/autotest_common.sh@10 -- # set +x
00:05:03.445 ************************************
00:05:03.445 END TEST event_scheduler
00:05:03.445 ************************************
00:05:03.445 13:46:54 -- event/event.sh@51 -- # modprobe -n nbd
00:05:03.445 13:46:54 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:03.445 13:46:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:03.445 13:46:54 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:03.445 13:46:54 -- common/autotest_common.sh@10 -- # set +x
00:05:03.445 ************************************
00:05:03.445 START TEST app_repeat
00:05:03.445 ************************************
00:05:03.445 13:46:54 -- common/autotest_common.sh@1104 -- # app_repeat_test
00:05:03.445 13:46:54 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:03.445 13:46:54 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:03.445 13:46:54 -- event/event.sh@13 -- # local nbd_list
00:05:03.445 13:46:54 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:03.445 13:46:54 -- event/event.sh@14 -- # local bdev_list
00:05:03.445 13:46:54 -- event/event.sh@15 -- # local repeat_times=4
00:05:03.445 13:46:54 -- event/event.sh@17 -- # modprobe nbd
00:05:03.445 13:46:54 -- event/event.sh@19 -- # repeat_pid=3082605
00:05:03.445 13:46:54 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:03.445 13:46:54 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:03.445 13:46:54 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3082605'
00:05:03.445 Process app_repeat pid: 3082605
00:05:03.445 13:46:54 -- event/event.sh@23 -- # for i in {0..2}
00:05:03.445 13:46:54 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:03.445 spdk_app_start Round 0
00:05:03.445 13:46:54 -- event/event.sh@25 -- # waitforlisten 3082605 /var/tmp/spdk-nbd.sock
00:05:03.445 13:46:54 -- common/autotest_common.sh@819 -- # '[' -z 3082605 ']'
00:05:03.445 13:46:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:03.445 13:46:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:03.445 13:46:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:03.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:03.445 13:46:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:03.445 13:46:54 -- common/autotest_common.sh@10 -- # set +x
00:05:03.445 [2024-07-23 13:46:54.365218] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:05:03.445 [2024-07-23 13:46:54.365278] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082605 ]
00:05:03.445 EAL: No free 2048 kB hugepages reported on node 1
00:05:03.445 [2024-07-23 13:46:54.421009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:03.705 [2024-07-23 13:46:54.491707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:03.705 [2024-07-23 13:46:54.491709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:04.272 13:46:55 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:04.272 13:46:55 -- common/autotest_common.sh@852 -- # return 0
00:05:04.272 13:46:55 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:04.531 Malloc0
00:05:04.531 13:46:55 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:04.531 Malloc1
00:05:04.531 13:46:55 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@12 -- # local i
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:04.531 13:46:55 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:04.790 /dev/nbd0
00:05:04.790 13:46:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:04.790 13:46:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:04.790 13:46:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:05:04.790 13:46:55 -- common/autotest_common.sh@857 -- # local i
00:05:04.790 13:46:55 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:04.790 13:46:55 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:04.790 13:46:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:05:04.790 13:46:55 -- common/autotest_common.sh@861 -- # break
00:05:04.790 13:46:55 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:04.790 13:46:55 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:04.790 13:46:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:04.790 1+0 records in
00:05:04.790 1+0 records out
00:05:04.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195871 s, 20.9 MB/s
00:05:04.790 13:46:55 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:04.790 13:46:55 -- common/autotest_common.sh@874 -- # size=4096
00:05:04.790 13:46:55 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:04.790 13:46:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:04.790 13:46:55 -- common/autotest_common.sh@877 -- # return 0
00:05:04.790 13:46:55 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:04.790 13:46:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:04.790 13:46:55 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:05.050 /dev/nbd1
00:05:05.050 13:46:55 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:05.050 13:46:55 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:05.050 13:46:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:05:05.050 13:46:55 -- common/autotest_common.sh@857 -- # local i
00:05:05.050 13:46:55 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:05.050 13:46:55 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:05.050 13:46:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:05:05.050 13:46:55 -- common/autotest_common.sh@861 -- # break
00:05:05.050 13:46:55 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:05.050 13:46:55 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:05.050 13:46:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:05.050 1+0 records in
00:05:05.050 1+0 records out
00:05:05.050 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00023098 s, 17.7 MB/s
00:05:05.050 13:46:55 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:05.050 13:46:55 -- common/autotest_common.sh@874 -- # size=4096
00:05:05.050 13:46:55 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:05.050 13:46:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:05.050 13:46:55 -- common/autotest_common.sh@877 -- # return 0
00:05:05.050 13:46:55 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:05.050 13:46:55 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:05.050 13:46:55 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:05.050 13:46:55 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:05.050 13:46:55 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:05.308 13:46:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:05.308 {
00:05:05.308 "nbd_device": "/dev/nbd0",
00:05:05.308 "bdev_name": "Malloc0"
00:05:05.308 },
00:05:05.308 {
00:05:05.308 "nbd_device": "/dev/nbd1",
00:05:05.308 "bdev_name": "Malloc1" 00:05:05.308 } 00:05:05.308 ]' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:05.308 { 00:05:05.308 "nbd_device": "/dev/nbd0", 00:05:05.308 "bdev_name": "Malloc0" 00:05:05.308 }, 00:05:05.308 { 00:05:05.308 "nbd_device": "/dev/nbd1", 00:05:05.308 "bdev_name": "Malloc1" 00:05:05.308 } 00:05:05.308 ]' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:05.308 /dev/nbd1' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:05.308 /dev/nbd1' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@65 -- # count=2 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@95 -- # count=2 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:05.308 256+0 records in 00:05:05.308 256+0 records out 00:05:05.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010356 s, 101 MB/s 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:05.308 256+0 records in 00:05:05.308 256+0 records out 00:05:05.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137311 s, 76.4 MB/s 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:05.308 256+0 records in 00:05:05.308 256+0 records out 00:05:05.308 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145116 s, 72.3 MB/s 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@51 -- # local i 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.308 13:46:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@41 -- # break 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@41 -- # break 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.566 13:46:56 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@65 -- # true 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.824 13:46:56 -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.824 13:46:56 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:06.083 13:46:56 -- event/event.sh@35 -- # 
sleep 3 00:05:06.341 [2024-07-23 13:46:57.190290] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.341 [2024-07-23 13:46:57.254896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.341 [2024-07-23 13:46:57.254898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.341 [2024-07-23 13:46:57.296088] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.341 [2024-07-23 13:46:57.296128] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:09.629 13:46:59 -- event/event.sh@23 -- # for i in {0..2} 00:05:09.629 13:46:59 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:09.629 spdk_app_start Round 1 00:05:09.629 13:46:59 -- event/event.sh@25 -- # waitforlisten 3082605 /var/tmp/spdk-nbd.sock 00:05:09.629 13:46:59 -- common/autotest_common.sh@819 -- # '[' -z 3082605 ']' 00:05:09.629 13:46:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:09.629 13:46:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:09.629 13:46:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:09.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:09.629 13:46:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:09.629 13:46:59 -- common/autotest_common.sh@10 -- # set +x 00:05:09.629 13:47:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.629 13:47:00 -- common/autotest_common.sh@852 -- # return 0 00:05:09.629 13:47:00 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.629 Malloc0 00:05:09.629 13:47:00 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:09.629 Malloc1 00:05:09.629 13:47:00 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.629 13:47:00 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.629 13:47:00 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.629 13:47:00 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:09.629 13:47:00 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.629 13:47:00 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:09.629 13:47:00 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:09.629 13:47:00 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.630 13:47:00 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:09.630 13:47:00 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:09.630 13:47:00 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.630 13:47:00 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:09.630 13:47:00 -- bdev/nbd_common.sh@12 -- # local i 00:05:09.630 13:47:00 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:09.630 13:47:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.630 13:47:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:09.889 /dev/nbd0 00:05:09.889 13:47:00 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:09.889 13:47:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:09.889 13:47:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:09.889 13:47:00 -- common/autotest_common.sh@857 -- # local i 00:05:09.889 13:47:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:09.889 13:47:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:09.889 13:47:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:09.889 13:47:00 -- common/autotest_common.sh@861 -- # break 00:05:09.889 13:47:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:09.889 13:47:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:09.889 13:47:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.889 1+0 records in 00:05:09.889 1+0 records out 00:05:09.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158559 s, 25.8 MB/s 00:05:09.889 13:47:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.889 13:47:00 -- common/autotest_common.sh@874 -- # size=4096 00:05:09.889 13:47:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.889 13:47:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:09.889 13:47:00 -- common/autotest_common.sh@877 -- # return 0 00:05:09.889 13:47:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.889 13:47:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.889 13:47:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:09.889 /dev/nbd1 00:05:09.889 13:47:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:09.889 13:47:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:09.889 13:47:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:09.889 13:47:00 -- common/autotest_common.sh@857 -- # local i 00:05:09.889 13:47:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:09.889 13:47:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:09.889 13:47:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:09.889 13:47:00 -- common/autotest_common.sh@861 -- # break 00:05:09.889 13:47:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:09.889 13:47:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:09.889 13:47:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:09.889 1+0 records in 00:05:09.889 1+0 records out 00:05:09.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000126139 s, 32.5 MB/s 00:05:09.889 13:47:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.889 13:47:00 -- common/autotest_common.sh@874 -- # size=4096 00:05:09.890 13:47:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:09.890 13:47:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:09.890 13:47:00 -- common/autotest_common.sh@877 -- # return 0 00:05:09.890 13:47:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:09.890 13:47:00 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:09.890 13:47:00 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.890 13:47:00 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.890 13:47:00 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:10.282 { 00:05:10.282 "nbd_device": "/dev/nbd0", 00:05:10.282 "bdev_name": "Malloc0" 00:05:10.282 }, 00:05:10.282 { 00:05:10.282 "nbd_device": "/dev/nbd1", 00:05:10.282 "bdev_name": "Malloc1" 00:05:10.282 } 00:05:10.282 ]' 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:10.282 { 00:05:10.282 "nbd_device": "/dev/nbd0", 00:05:10.282 "bdev_name": "Malloc0" 00:05:10.282 }, 00:05:10.282 { 00:05:10.282 "nbd_device": "/dev/nbd1", 00:05:10.282 "bdev_name": "Malloc1" 00:05:10.282 } 00:05:10.282 ]' 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:10.282 /dev/nbd1' 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:10.282 /dev/nbd1' 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@65 -- # count=2 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@95 -- # count=2 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:10.282 13:47:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:10.282 256+0 records in 00:05:10.282 256+0 records out 00:05:10.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104003 s, 101 MB/s 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:10.283 256+0 records in 00:05:10.283 256+0 records out 00:05:10.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0132809 s, 79.0 MB/s 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:10.283 256+0 records in 00:05:10.283 256+0 records out 00:05:10.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147791 s, 71.0 MB/s 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@51 -- # local i 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.283 13:47:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@41 -- # break 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@41 -- # break 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@45 -- # return 0 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:10.542 13:47:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@65 -- # true 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@65 -- # count=0 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@104 -- # count=0 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:10.801 13:47:01 -- bdev/nbd_common.sh@109 -- # return 0 00:05:10.801 13:47:01 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:11.061 13:47:01 -- event/event.sh@35 -- # sleep 3 00:05:11.321 [2024-07-23 13:47:02.152115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:11.321 [2024-07-23 13:47:02.217540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.321 [2024-07-23 13:47:02.217541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.321 [2024-07-23 13:47:02.258909] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.321 [2024-07-23 13:47:02.258952] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:14.611 13:47:04 -- event/event.sh@23 -- # for i in {0..2} 00:05:14.611 13:47:04 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:14.611 spdk_app_start Round 2 00:05:14.611 13:47:04 -- event/event.sh@25 -- # waitforlisten 3082605 /var/tmp/spdk-nbd.sock 00:05:14.611 13:47:04 -- common/autotest_common.sh@819 -- # '[' -z 3082605 ']' 00:05:14.611 13:47:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:14.611 13:47:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:14.612 13:47:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:14.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
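Each app_repeat round drives the same NBD data path that the nbd_rpc_data_verify/nbd_dd_data_verify traces above spell out. Condensed into standalone commands, assuming the app is already listening on /var/tmp/spdk-nbd.sock and with the scratch-file name shortened for readability (error handling and the waitfornbd polling are omitted):

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $RPC bdev_malloc_create 64 4096        # 64 MiB malloc bdev, 4 KiB blocks; prints "Malloc0"
  $RPC nbd_start_disk Malloc0 /dev/nbd0  # expose the bdev as a kernel block device
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0     # the write must read back byte-identical
  $RPC nbd_stop_disk /dev/nbd0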
00:05:14.612 13:47:04 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:14.612 13:47:04 -- common/autotest_common.sh@10 -- # set +x
00:05:14.612 13:47:05 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:14.612 13:47:05 -- common/autotest_common.sh@852 -- # return 0
00:05:14.612 13:47:05 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:14.612 Malloc0
00:05:14.612 13:47:05 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:05:14.612 Malloc1
00:05:14.612 13:47:05 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@91 -- # local bdev_list
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@92 -- # local nbd_list
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@12 -- # local i
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:14.612 13:47:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:05:14.871 /dev/nbd0
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:05:14.871 13:47:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:05:14.871 13:47:05 -- common/autotest_common.sh@857 -- # local i
00:05:14.871 13:47:05 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:14.871 13:47:05 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:14.871 13:47:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:05:14.871 13:47:05 -- common/autotest_common.sh@861 -- # break
00:05:14.871 13:47:05 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:14.871 13:47:05 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:14.871 13:47:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:14.871 1+0 records in
00:05:14.871 1+0 records out
00:05:14.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222461 s, 18.4 MB/s
00:05:14.871 13:47:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:14.871 13:47:05 -- common/autotest_common.sh@874 -- # size=4096
00:05:14.871 13:47:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:14.871 13:47:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:14.871 13:47:05 -- common/autotest_common.sh@877 -- # return 0
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:05:14.871 /dev/nbd1
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:05:14.871 13:47:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:05:14.871 13:47:05 -- common/autotest_common.sh@857 -- # local i
00:05:14.871 13:47:05 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:05:14.871 13:47:05 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:05:14.871 13:47:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:05:14.871 13:47:05 -- common/autotest_common.sh@861 -- # break
00:05:14.871 13:47:05 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:05:14.871 13:47:05 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:05:14.871 13:47:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:05:14.871 1+0 records in
00:05:14.871 1+0 records out
00:05:14.871 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215678 s, 19.0 MB/s
00:05:14.871 13:47:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:14.871 13:47:05 -- common/autotest_common.sh@874 -- # size=4096
00:05:14.871 13:47:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:05:14.871 13:47:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:05:14.871 13:47:05 -- common/autotest_common.sh@877 -- # return 0
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.871 13:47:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:05:15.131 {
00:05:15.131 "nbd_device": "/dev/nbd0",
00:05:15.131 "bdev_name": "Malloc0"
00:05:15.131 },
00:05:15.131 {
00:05:15.131 "nbd_device": "/dev/nbd1",
00:05:15.131 "bdev_name": "Malloc1"
00:05:15.131 }
00:05:15.131 ]'
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@64 -- # echo '[
00:05:15.131 {
00:05:15.131 "nbd_device": "/dev/nbd0",
00:05:15.131 "bdev_name": "Malloc0"
00:05:15.131 },
00:05:15.131 {
00:05:15.131 "nbd_device": "/dev/nbd1",
00:05:15.131 "bdev_name": "Malloc1"
00:05:15.131 }
00:05:15.131 ]'
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:05:15.131 /dev/nbd1'
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:05:15.131 /dev/nbd1'
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@65 -- # count=2
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@66 -- # echo 2
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@95 -- # count=2
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:15.131 256+0 records in
00:05:15.131 256+0 records out
00:05:15.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102953 s, 102 MB/s
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:15.131 256+0 records in
00:05:15.131 256+0 records out
00:05:15.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0135696 s, 77.3 MB/s
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:15.131 256+0 records in
00:05:15.131 256+0 records out
00:05:15.131 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014808 s, 70.8 MB/s
00:05:15.131 13:47:06 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@51 -- # local i
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:15.132 13:47:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@41 -- # break
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@45 -- # return 0
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:15.391 13:47:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@41 -- # break
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@45 -- # return 0
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:15.650 13:47:06 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@65 -- # echo ''
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@65 -- # true
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@65 -- # count=0
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@66 -- # echo 0
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@104 -- # count=0
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:15.910 13:47:06 -- bdev/nbd_common.sh@109 -- # return 0
00:05:15.910 13:47:06 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:16.172 13:47:06 -- event/event.sh@35 -- # sleep 3
00:05:16.172 [2024-07-23 13:47:07.115799] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:16.172 [2024-07-23 13:47:07.181850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:16.172 [2024-07-23 13:47:07.181852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.432 [2024-07-23 13:47:07.223007] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:16.432 [2024-07-23 13:47:07.223056] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
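After each teardown, nbd_get_count asserts that no exported devices remain; the jq/grep pipeline traced above boils down to the check below (a sketch using the same socket as above; grep -c exits non-zero when the count is zero, hence the || true):

  count=$($RPC nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] || { echo "stale NBD devices: $count" >&2; exit 1; }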
00:05:18.967 13:47:09 -- event/event.sh@38 -- # waitforlisten 3082605 /var/tmp/spdk-nbd.sock 00:05:18.967 13:47:09 -- common/autotest_common.sh@819 -- # '[' -z 3082605 ']' 00:05:18.967 13:47:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:18.967 13:47:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:18.967 13:47:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:18.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:18.967 13:47:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:18.967 13:47:09 -- common/autotest_common.sh@10 -- # set +x 00:05:19.226 13:47:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:19.226 13:47:10 -- common/autotest_common.sh@852 -- # return 0 00:05:19.226 13:47:10 -- event/event.sh@39 -- # killprocess 3082605 00:05:19.226 13:47:10 -- common/autotest_common.sh@926 -- # '[' -z 3082605 ']' 00:05:19.226 13:47:10 -- common/autotest_common.sh@930 -- # kill -0 3082605 00:05:19.226 13:47:10 -- common/autotest_common.sh@931 -- # uname 00:05:19.226 13:47:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:19.226 13:47:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3082605 00:05:19.226 13:47:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:19.226 13:47:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:19.226 13:47:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3082605' 00:05:19.226 killing process with pid 3082605 00:05:19.226 13:47:10 -- common/autotest_common.sh@945 -- # kill 3082605 00:05:19.226 13:47:10 -- common/autotest_common.sh@950 -- # wait 3082605 00:05:19.484 spdk_app_start is called in Round 0. 00:05:19.484 Shutdown signal received, stop current app iteration 00:05:19.484 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:05:19.484 spdk_app_start is called in Round 1. 00:05:19.484 Shutdown signal received, stop current app iteration 00:05:19.484 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:05:19.484 spdk_app_start is called in Round 2. 00:05:19.484 Shutdown signal received, stop current app iteration 00:05:19.484 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization... 00:05:19.484 spdk_app_start is called in Round 3. 
00:05:19.484 Shutdown signal received, stop current app iteration 00:05:19.484 13:47:10 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:19.484 13:47:10 -- event/event.sh@42 -- # return 0 00:05:19.484 00:05:19.484 real 0m15.979s 00:05:19.484 user 0m34.538s 00:05:19.484 sys 0m2.282s 00:05:19.484 13:47:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.484 13:47:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.484 ************************************ 00:05:19.484 END TEST app_repeat 00:05:19.484 ************************************ 00:05:19.484 13:47:10 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:19.484 13:47:10 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:19.484 13:47:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.484 13:47:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.484 13:47:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.484 ************************************ 00:05:19.485 START TEST cpu_locks 00:05:19.485 ************************************ 00:05:19.485 13:47:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:19.485 * Looking for test storage... 00:05:19.485 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:19.485 13:47:10 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:19.485 13:47:10 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:19.485 13:47:10 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:19.485 13:47:10 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:19.485 13:47:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.485 13:47:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.485 13:47:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.485 ************************************ 00:05:19.485 START TEST default_locks 00:05:19.485 ************************************ 00:05:19.485 13:47:10 -- common/autotest_common.sh@1104 -- # default_locks 00:05:19.485 13:47:10 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3085499 00:05:19.485 13:47:10 -- event/cpu_locks.sh@47 -- # waitforlisten 3085499 00:05:19.485 13:47:10 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.485 13:47:10 -- common/autotest_common.sh@819 -- # '[' -z 3085499 ']' 00:05:19.485 13:47:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.485 13:47:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:19.485 13:47:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.485 13:47:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:19.485 13:47:10 -- common/autotest_common.sh@10 -- # set +x 00:05:19.485 [2024-07-23 13:47:10.486218] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:19.485 [2024-07-23 13:47:10.486274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085499 ] 00:05:19.743 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.743 [2024-07-23 13:47:10.541102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.743 [2024-07-23 13:47:10.619808] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:19.743 [2024-07-23 13:47:10.619921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.311 13:47:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:20.311 13:47:11 -- common/autotest_common.sh@852 -- # return 0 00:05:20.311 13:47:11 -- event/cpu_locks.sh@49 -- # locks_exist 3085499 00:05:20.311 13:47:11 -- event/cpu_locks.sh@22 -- # lslocks -p 3085499 00:05:20.311 13:47:11 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:20.571 lslocks: write error 00:05:20.571 13:47:11 -- event/cpu_locks.sh@50 -- # killprocess 3085499 00:05:20.571 13:47:11 -- common/autotest_common.sh@926 -- # '[' -z 3085499 ']' 00:05:20.571 13:47:11 -- common/autotest_common.sh@930 -- # kill -0 3085499 00:05:20.571 13:47:11 -- common/autotest_common.sh@931 -- # uname 00:05:20.571 13:47:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:20.571 13:47:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3085499 00:05:20.571 13:47:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:20.571 13:47:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:20.571 13:47:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3085499' 00:05:20.571 killing process with pid 3085499 00:05:20.571 13:47:11 -- common/autotest_common.sh@945 -- # kill 3085499 00:05:20.571 13:47:11 -- common/autotest_common.sh@950 -- # wait 3085499 00:05:21.138 13:47:11 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3085499 00:05:21.138 13:47:11 -- common/autotest_common.sh@640 -- # local es=0 00:05:21.138 13:47:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3085499 00:05:21.138 13:47:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:21.138 13:47:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:21.138 13:47:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:21.138 13:47:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:21.138 13:47:11 -- common/autotest_common.sh@643 -- # waitforlisten 3085499 00:05:21.138 13:47:11 -- common/autotest_common.sh@819 -- # '[' -z 3085499 ']' 00:05:21.138 13:47:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.138 13:47:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.138 13:47:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
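The locks_exist helper traced at cpu_locks.sh@22 is a one-liner around lslocks; the "lslocks: write error" that follows it in the output is lslocks hitting a closed pipe after grep -q exits on its first match, not a test failure. A minimal sketch, with the pid from this run:

locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock    # true only if the pid holds a CPU-core lock file
}
locks_exist 3085499 && echo "core lock held by spdk_tgt"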
00:05:21.138 13:47:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.138 13:47:11 -- common/autotest_common.sh@10 -- # set +x 00:05:21.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3085499) - No such process 00:05:21.138 ERROR: process (pid: 3085499) is no longer running 00:05:21.138 13:47:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:21.138 13:47:11 -- common/autotest_common.sh@852 -- # return 1 00:05:21.138 13:47:11 -- common/autotest_common.sh@643 -- # es=1 00:05:21.138 13:47:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:21.138 13:47:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:21.138 13:47:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:21.138 13:47:11 -- event/cpu_locks.sh@54 -- # no_locks 00:05:21.138 13:47:11 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:21.138 13:47:11 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:21.138 13:47:11 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:21.138 00:05:21.138 real 0m1.494s 00:05:21.138 user 0m1.545s 00:05:21.138 sys 0m0.481s 00:05:21.138 13:47:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.138 13:47:11 -- common/autotest_common.sh@10 -- # set +x 00:05:21.138 ************************************ 00:05:21.138 END TEST default_locks 00:05:21.138 ************************************ 00:05:21.139 13:47:11 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:21.139 13:47:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.139 13:47:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.139 13:47:11 -- common/autotest_common.sh@10 -- # set +x 00:05:21.139 ************************************ 00:05:21.139 START TEST default_locks_via_rpc 00:05:21.139 ************************************ 00:05:21.139 13:47:11 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:05:21.139 13:47:11 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3085858 00:05:21.139 13:47:11 -- event/cpu_locks.sh@63 -- # waitforlisten 3085858 00:05:21.139 13:47:11 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:21.139 13:47:11 -- common/autotest_common.sh@819 -- # '[' -z 3085858 ']' 00:05:21.139 13:47:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:21.139 13:47:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.139 13:47:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:21.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:21.139 13:47:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.139 13:47:11 -- common/autotest_common.sh@10 -- # set +x 00:05:21.139 [2024-07-23 13:47:12.019503] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:21.139 [2024-07-23 13:47:12.019554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3085858 ] 00:05:21.139 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.139 [2024-07-23 13:47:12.071690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.139 [2024-07-23 13:47:12.148222] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:21.139 [2024-07-23 13:47:12.148337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.109 13:47:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:22.109 13:47:12 -- common/autotest_common.sh@852 -- # return 0 00:05:22.109 13:47:12 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:22.109 13:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.109 13:47:12 -- common/autotest_common.sh@10 -- # set +x 00:05:22.109 13:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.109 13:47:12 -- event/cpu_locks.sh@67 -- # no_locks 00:05:22.109 13:47:12 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:22.109 13:47:12 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:22.109 13:47:12 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:22.109 13:47:12 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:22.109 13:47:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:22.109 13:47:12 -- common/autotest_common.sh@10 -- # set +x 00:05:22.109 13:47:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:22.109 13:47:12 -- event/cpu_locks.sh@71 -- # locks_exist 3085858 00:05:22.109 13:47:12 -- event/cpu_locks.sh@22 -- # lslocks -p 3085858 00:05:22.109 13:47:12 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.367 13:47:13 -- event/cpu_locks.sh@73 -- # killprocess 3085858 00:05:22.367 13:47:13 -- common/autotest_common.sh@926 -- # '[' -z 3085858 ']' 00:05:22.367 13:47:13 -- common/autotest_common.sh@930 -- # kill -0 3085858 00:05:22.367 13:47:13 -- common/autotest_common.sh@931 -- # uname 00:05:22.367 13:47:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:22.367 13:47:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3085858 00:05:22.367 13:47:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:22.367 13:47:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:22.367 13:47:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3085858' 00:05:22.367 killing process with pid 3085858 00:05:22.367 13:47:13 -- common/autotest_common.sh@945 -- # kill 3085858 00:05:22.367 13:47:13 -- common/autotest_common.sh@950 -- # wait 3085858 00:05:22.624 00:05:22.624 real 0m1.563s 00:05:22.624 user 0m1.641s 00:05:22.624 sys 0m0.491s 00:05:22.624 13:47:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.624 13:47:13 -- common/autotest_common.sh@10 -- # set +x 00:05:22.624 ************************************ 00:05:22.624 END TEST default_locks_via_rpc 00:05:22.624 ************************************ 00:05:22.624 13:47:13 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:22.624 13:47:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.624 13:47:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.624 13:47:13 -- 
common/autotest_common.sh@10 -- # set +x 00:05:22.624 ************************************ 00:05:22.624 START TEST non_locking_app_on_locked_coremask 00:05:22.624 ************************************ 00:05:22.624 13:47:13 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:05:22.624 13:47:13 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3086165 00:05:22.624 13:47:13 -- event/cpu_locks.sh@81 -- # waitforlisten 3086165 /var/tmp/spdk.sock 00:05:22.624 13:47:13 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.624 13:47:13 -- common/autotest_common.sh@819 -- # '[' -z 3086165 ']' 00:05:22.624 13:47:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.624 13:47:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:22.624 13:47:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.624 13:47:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:22.624 13:47:13 -- common/autotest_common.sh@10 -- # set +x 00:05:22.624 [2024-07-23 13:47:13.620755] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:22.624 [2024-07-23 13:47:13.620803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086165 ] 00:05:22.881 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.881 [2024-07-23 13:47:13.673342] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.881 [2024-07-23 13:47:13.750449] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.881 [2024-07-23 13:47:13.750561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.446 13:47:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:23.446 13:47:14 -- common/autotest_common.sh@852 -- # return 0 00:05:23.446 13:47:14 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3086186 00:05:23.447 13:47:14 -- event/cpu_locks.sh@85 -- # waitforlisten 3086186 /var/tmp/spdk2.sock 00:05:23.447 13:47:14 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:23.447 13:47:14 -- common/autotest_common.sh@819 -- # '[' -z 3086186 ']' 00:05:23.447 13:47:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:23.447 13:47:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:23.447 13:47:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:23.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:23.447 13:47:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:23.447 13:47:14 -- common/autotest_common.sh@10 -- # set +x 00:05:23.447 [2024-07-23 13:47:14.451499] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:23.447 [2024-07-23 13:47:14.451547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086186 ] 00:05:23.705 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.705 [2024-07-23 13:47:14.528520] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:23.705 [2024-07-23 13:47:14.528545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.705 [2024-07-23 13:47:14.673919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:23.705 [2024-07-23 13:47:14.674037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.270 13:47:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:24.270 13:47:15 -- common/autotest_common.sh@852 -- # return 0 00:05:24.270 13:47:15 -- event/cpu_locks.sh@87 -- # locks_exist 3086165 00:05:24.270 13:47:15 -- event/cpu_locks.sh@22 -- # lslocks -p 3086165 00:05:24.270 13:47:15 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:24.837 lslocks: write error 00:05:24.837 13:47:15 -- event/cpu_locks.sh@89 -- # killprocess 3086165 00:05:24.837 13:47:15 -- common/autotest_common.sh@926 -- # '[' -z 3086165 ']' 00:05:24.837 13:47:15 -- common/autotest_common.sh@930 -- # kill -0 3086165 00:05:24.837 13:47:15 -- common/autotest_common.sh@931 -- # uname 00:05:24.837 13:47:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:24.837 13:47:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3086165 00:05:24.837 13:47:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:24.837 13:47:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:24.837 13:47:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3086165' 00:05:24.837 killing process with pid 3086165 00:05:24.837 13:47:15 -- common/autotest_common.sh@945 -- # kill 3086165 00:05:24.837 13:47:15 -- common/autotest_common.sh@950 -- # wait 3086165 00:05:25.403 13:47:16 -- event/cpu_locks.sh@90 -- # killprocess 3086186 00:05:25.403 13:47:16 -- common/autotest_common.sh@926 -- # '[' -z 3086186 ']' 00:05:25.403 13:47:16 -- common/autotest_common.sh@930 -- # kill -0 3086186 00:05:25.403 13:47:16 -- common/autotest_common.sh@931 -- # uname 00:05:25.403 13:47:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:25.403 13:47:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3086186 00:05:25.661 13:47:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:25.661 13:47:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:25.661 13:47:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3086186' 00:05:25.661 killing process with pid 3086186 00:05:25.661 13:47:16 -- common/autotest_common.sh@945 -- # kill 3086186 00:05:25.661 13:47:16 -- common/autotest_common.sh@950 -- # wait 3086186 00:05:25.921 00:05:25.921 real 0m3.213s 00:05:25.921 user 0m3.432s 00:05:25.921 sys 0m0.876s 00:05:25.921 13:47:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.921 13:47:16 -- common/autotest_common.sh@10 -- # set +x 00:05:25.921 ************************************ 00:05:25.921 END TEST non_locking_app_on_locked_coremask 00:05:25.921 ************************************ 00:05:25.921 13:47:16 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:05:25.921 13:47:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.921 13:47:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.921 13:47:16 -- common/autotest_common.sh@10 -- # set +x 00:05:25.921 ************************************ 00:05:25.921 START TEST locking_app_on_unlocked_coremask 00:05:25.921 ************************************ 00:05:25.921 13:47:16 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:05:25.921 13:47:16 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3086677 00:05:25.921 13:47:16 -- event/cpu_locks.sh@99 -- # waitforlisten 3086677 /var/tmp/spdk.sock 00:05:25.921 13:47:16 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:25.921 13:47:16 -- common/autotest_common.sh@819 -- # '[' -z 3086677 ']' 00:05:25.921 13:47:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.921 13:47:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:25.921 13:47:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.921 13:47:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:25.921 13:47:16 -- common/autotest_common.sh@10 -- # set +x 00:05:25.921 [2024-07-23 13:47:16.875330] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:25.921 [2024-07-23 13:47:16.875379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086677 ] 00:05:25.921 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.921 [2024-07-23 13:47:16.928622] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:25.921 [2024-07-23 13:47:16.928646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.179 [2024-07-23 13:47:17.006420] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:26.179 [2024-07-23 13:47:17.006534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.745 13:47:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.745 13:47:17 -- common/autotest_common.sh@852 -- # return 0 00:05:26.745 13:47:17 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3086907 00:05:26.745 13:47:17 -- event/cpu_locks.sh@103 -- # waitforlisten 3086907 /var/tmp/spdk2.sock 00:05:26.745 13:47:17 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.745 13:47:17 -- common/autotest_common.sh@819 -- # '[' -z 3086907 ']' 00:05:26.745 13:47:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.745 13:47:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:26.745 13:47:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
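The two launches above are the heart of this test: both targets sit on core 0, which is only legal because the first was started with --disable-cpumask-locks and never claims /var/tmp/spdk_cpu_lock_000, leaving the second free to take it. Roughly, with the binary path from this run:

tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$tgt" -m 0x1 --disable-cpumask-locks &     # first instance: no lock taken (pid 3086677 above)
"$tgt" -m 0x1 -r /var/tmp/spdk2.sock &      # second instance claims the core-0 lock itself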
00:05:26.745 13:47:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:26.745 13:47:17 -- common/autotest_common.sh@10 -- # set +x 00:05:26.745 [2024-07-23 13:47:17.714911] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:26.745 [2024-07-23 13:47:17.714959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3086907 ] 00:05:26.745 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.003 [2024-07-23 13:47:17.789831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.003 [2024-07-23 13:47:17.932714] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:27.003 [2024-07-23 13:47:17.932835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.569 13:47:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:27.569 13:47:18 -- common/autotest_common.sh@852 -- # return 0 00:05:27.569 13:47:18 -- event/cpu_locks.sh@105 -- # locks_exist 3086907 00:05:27.569 13:47:18 -- event/cpu_locks.sh@22 -- # lslocks -p 3086907 00:05:27.569 13:47:18 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.827 lslocks: write error 00:05:27.827 13:47:18 -- event/cpu_locks.sh@107 -- # killprocess 3086677 00:05:27.827 13:47:18 -- common/autotest_common.sh@926 -- # '[' -z 3086677 ']' 00:05:27.827 13:47:18 -- common/autotest_common.sh@930 -- # kill -0 3086677 00:05:27.827 13:47:18 -- common/autotest_common.sh@931 -- # uname 00:05:27.827 13:47:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.085 13:47:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3086677 00:05:28.085 13:47:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.085 13:47:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.085 13:47:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3086677' 00:05:28.085 killing process with pid 3086677 00:05:28.085 13:47:18 -- common/autotest_common.sh@945 -- # kill 3086677 00:05:28.085 13:47:18 -- common/autotest_common.sh@950 -- # wait 3086677 00:05:28.651 13:47:19 -- event/cpu_locks.sh@108 -- # killprocess 3086907 00:05:28.651 13:47:19 -- common/autotest_common.sh@926 -- # '[' -z 3086907 ']' 00:05:28.651 13:47:19 -- common/autotest_common.sh@930 -- # kill -0 3086907 00:05:28.651 13:47:19 -- common/autotest_common.sh@931 -- # uname 00:05:28.651 13:47:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:28.651 13:47:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3086907 00:05:28.651 13:47:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:28.651 13:47:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:28.651 13:47:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3086907' 00:05:28.651 killing process with pid 3086907 00:05:28.651 13:47:19 -- common/autotest_common.sh@945 -- # kill 3086907 00:05:28.651 13:47:19 -- common/autotest_common.sh@950 -- # wait 3086907 00:05:29.218 00:05:29.218 real 0m3.107s 00:05:29.218 user 0m3.325s 00:05:29.218 sys 0m0.833s 00:05:29.218 13:47:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.218 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:05:29.218 ************************************ 00:05:29.218 END TEST locking_app_on_unlocked_coremask 
00:05:29.218 ************************************ 00:05:29.218 13:47:19 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:29.218 13:47:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.218 13:47:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.218 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:05:29.218 ************************************ 00:05:29.218 START TEST locking_app_on_locked_coremask 00:05:29.218 ************************************ 00:05:29.218 13:47:19 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:05:29.218 13:47:19 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3087213 00:05:29.218 13:47:19 -- event/cpu_locks.sh@116 -- # waitforlisten 3087213 /var/tmp/spdk.sock 00:05:29.218 13:47:19 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.218 13:47:19 -- common/autotest_common.sh@819 -- # '[' -z 3087213 ']' 00:05:29.218 13:47:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.218 13:47:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.218 13:47:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.218 13:47:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.218 13:47:19 -- common/autotest_common.sh@10 -- # set +x 00:05:29.218 [2024-07-23 13:47:20.024471] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:29.218 [2024-07-23 13:47:20.024524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087213 ] 00:05:29.218 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.218 [2024-07-23 13:47:20.081126] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.218 [2024-07-23 13:47:20.158307] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:29.218 [2024-07-23 13:47:20.158432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.154 13:47:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.154 13:47:20 -- common/autotest_common.sh@852 -- # return 0 00:05:30.154 13:47:20 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3087420 00:05:30.154 13:47:20 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3087420 /var/tmp/spdk2.sock 00:05:30.154 13:47:20 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:30.154 13:47:20 -- common/autotest_common.sh@640 -- # local es=0 00:05:30.154 13:47:20 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3087420 /var/tmp/spdk2.sock 00:05:30.154 13:47:20 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:30.154 13:47:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:30.154 13:47:20 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:30.154 13:47:20 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:30.154 13:47:20 -- common/autotest_common.sh@643 -- # waitforlisten 3087420 /var/tmp/spdk2.sock 00:05:30.154 13:47:20 -- common/autotest_common.sh@819 -- 
# '[' -z 3087420 ']' 00:05:30.154 13:47:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.154 13:47:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:30.154 13:47:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.154 13:47:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:30.154 13:47:20 -- common/autotest_common.sh@10 -- # set +x 00:05:30.154 [2024-07-23 13:47:20.864112] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:30.154 [2024-07-23 13:47:20.864158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087420 ] 00:05:30.154 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.154 [2024-07-23 13:47:20.940475] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3087213 has claimed it. 00:05:30.154 [2024-07-23 13:47:20.940515] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:30.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3087420) - No such process 00:05:30.722 ERROR: process (pid: 3087420) is no longer running 00:05:30.722 13:47:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.722 13:47:21 -- common/autotest_common.sh@852 -- # return 1 00:05:30.722 13:47:21 -- common/autotest_common.sh@643 -- # es=1 00:05:30.722 13:47:21 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:30.722 13:47:21 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:30.722 13:47:21 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:30.722 13:47:21 -- event/cpu_locks.sh@122 -- # locks_exist 3087213 00:05:30.722 13:47:21 -- event/cpu_locks.sh@22 -- # lslocks -p 3087213 00:05:30.722 13:47:21 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.981 lslocks: write error 00:05:30.981 13:47:21 -- event/cpu_locks.sh@124 -- # killprocess 3087213 00:05:30.981 13:47:21 -- common/autotest_common.sh@926 -- # '[' -z 3087213 ']' 00:05:30.981 13:47:21 -- common/autotest_common.sh@930 -- # kill -0 3087213 00:05:30.981 13:47:21 -- common/autotest_common.sh@931 -- # uname 00:05:30.981 13:47:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:30.981 13:47:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3087213 00:05:30.981 13:47:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:30.981 13:47:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:30.981 13:47:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3087213' 00:05:30.981 killing process with pid 3087213 00:05:30.981 13:47:21 -- common/autotest_common.sh@945 -- # kill 3087213 00:05:30.981 13:47:21 -- common/autotest_common.sh@950 -- # wait 3087213 00:05:31.240 00:05:31.240 real 0m2.171s 00:05:31.240 user 0m2.365s 00:05:31.240 sys 0m0.572s 00:05:31.240 13:47:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.240 13:47:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.240 ************************************ 00:05:31.240 END TEST locking_app_on_locked_coremask 00:05:31.240 ************************************ 00:05:31.240 
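The "Cannot create lock on core 0" error above is the point of this test, not a regression: with locks active, a second target on an already-claimed mask must refuse to start. Reproduced by hand it would look roughly like:

tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
"$tgt" -m 0x1 &                          # claims the core-0 lock (pid 3087213 in this run)
"$tgt" -m 0x1 -r /var/tmp/spdk2.sock     # exits: unable to acquire lock on assigned core mask
echo $?                                  # non-zero, which is what NOT waitforlisten asserts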
13:47:22 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:31.240 13:47:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.241 13:47:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.241 13:47:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.241 ************************************ 00:05:31.241 START TEST locking_overlapped_coremask 00:05:31.241 ************************************ 00:05:31.241 13:47:22 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:05:31.241 13:47:22 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3087680 00:05:31.241 13:47:22 -- event/cpu_locks.sh@133 -- # waitforlisten 3087680 /var/tmp/spdk.sock 00:05:31.241 13:47:22 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:31.241 13:47:22 -- common/autotest_common.sh@819 -- # '[' -z 3087680 ']' 00:05:31.241 13:47:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.241 13:47:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.241 13:47:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.241 13:47:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.241 13:47:22 -- common/autotest_common.sh@10 -- # set +x 00:05:31.241 [2024-07-23 13:47:22.232086] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:31.241 [2024-07-23 13:47:22.232135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087680 ] 00:05:31.241 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.500 [2024-07-23 13:47:22.285502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.500 [2024-07-23 13:47:22.352951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.500 [2024-07-23 13:47:22.353149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.500 [2024-07-23 13:47:22.353244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.500 [2024-07-23 13:47:22.353246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.068 13:47:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.068 13:47:23 -- common/autotest_common.sh@852 -- # return 0 00:05:32.068 13:47:23 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3087915 00:05:32.068 13:47:23 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3087915 /var/tmp/spdk2.sock 00:05:32.068 13:47:23 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:32.068 13:47:23 -- common/autotest_common.sh@640 -- # local es=0 00:05:32.068 13:47:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 3087915 /var/tmp/spdk2.sock 00:05:32.068 13:47:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:05:32.327 13:47:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.327 13:47:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:05:32.327 13:47:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:32.327 13:47:23 
-- common/autotest_common.sh@643 -- # waitforlisten 3087915 /var/tmp/spdk2.sock 00:05:32.327 13:47:23 -- common/autotest_common.sh@819 -- # '[' -z 3087915 ']' 00:05:32.327 13:47:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.327 13:47:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.327 13:47:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.327 13:47:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.327 13:47:23 -- common/autotest_common.sh@10 -- # set +x 00:05:32.327 [2024-07-23 13:47:23.129757] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:32.327 [2024-07-23 13:47:23.129806] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3087915 ] 00:05:32.327 EAL: No free 2048 kB hugepages reported on node 1 00:05:32.327 [2024-07-23 13:47:23.205685] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3087680 has claimed it. 00:05:32.327 [2024-07-23 13:47:23.205724] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:32.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (3087915) - No such process 00:05:32.896 ERROR: process (pid: 3087915) is no longer running 00:05:32.896 13:47:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.896 13:47:23 -- common/autotest_common.sh@852 -- # return 1 00:05:32.896 13:47:23 -- common/autotest_common.sh@643 -- # es=1 00:05:32.896 13:47:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:32.896 13:47:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:32.896 13:47:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:32.896 13:47:23 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:32.896 13:47:23 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.896 13:47:23 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.896 13:47:23 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.896 13:47:23 -- event/cpu_locks.sh@141 -- # killprocess 3087680 00:05:32.896 13:47:23 -- common/autotest_common.sh@926 -- # '[' -z 3087680 ']' 00:05:32.896 13:47:23 -- common/autotest_common.sh@930 -- # kill -0 3087680 00:05:32.896 13:47:23 -- common/autotest_common.sh@931 -- # uname 00:05:32.896 13:47:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:32.896 13:47:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3087680 00:05:32.896 13:47:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:32.896 13:47:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:32.896 13:47:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3087680' 00:05:32.896 killing process with pid 3087680 00:05:32.896 13:47:23 -- common/autotest_common.sh@945 -- # kill 3087680 00:05:32.896 13:47:23 
-- common/autotest_common.sh@950 -- # wait 3087680 00:05:33.156 00:05:33.156 real 0m1.952s 00:05:33.156 user 0m5.545s 00:05:33.156 sys 0m0.408s 00:05:33.156 13:47:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.156 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:05:33.156 ************************************ 00:05:33.156 END TEST locking_overlapped_coremask 00:05:33.156 ************************************ 00:05:33.416 13:47:24 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:33.416 13:47:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:33.416 13:47:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:33.416 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:05:33.416 ************************************ 00:05:33.416 START TEST locking_overlapped_coremask_via_rpc 00:05:33.416 ************************************ 00:05:33.416 13:47:24 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:05:33.416 13:47:24 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3088055 00:05:33.416 13:47:24 -- event/cpu_locks.sh@149 -- # waitforlisten 3088055 /var/tmp/spdk.sock 00:05:33.416 13:47:24 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:33.416 13:47:24 -- common/autotest_common.sh@819 -- # '[' -z 3088055 ']' 00:05:33.416 13:47:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.416 13:47:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.416 13:47:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.416 13:47:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.416 13:47:24 -- common/autotest_common.sh@10 -- # set +x 00:05:33.416 [2024-07-23 13:47:24.227177] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:33.416 [2024-07-23 13:47:24.227227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088055 ] 00:05:33.416 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.416 [2024-07-23 13:47:24.280650] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
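The core named in that error follows directly from the two masks: 0x7 is binary 00111 (cores 0-2) and 0x1c is binary 11100 (cores 2-4), so the only contested core is core 2, exactly the one reported in "Cannot create lock on core 2".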
00:05:33.416 [2024-07-23 13:47:24.280678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:33.416 [2024-07-23 13:47:24.358833] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.416 [2024-07-23 13:47:24.358974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.416 [2024-07-23 13:47:24.359077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:33.416 [2024-07-23 13:47:24.359080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.398 13:47:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.398 13:47:25 -- common/autotest_common.sh@852 -- # return 0 00:05:34.398 13:47:25 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3088187 00:05:34.398 13:47:25 -- event/cpu_locks.sh@153 -- # waitforlisten 3088187 /var/tmp/spdk2.sock 00:05:34.398 13:47:25 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:34.398 13:47:25 -- common/autotest_common.sh@819 -- # '[' -z 3088187 ']' 00:05:34.398 13:47:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:34.398 13:47:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.398 13:47:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:34.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:34.398 13:47:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.398 13:47:25 -- common/autotest_common.sh@10 -- # set +x 00:05:34.398 [2024-07-23 13:47:25.072700] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:34.398 [2024-07-23 13:47:25.072749] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088187 ] 00:05:34.398 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.398 [2024-07-23 13:47:25.151939] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:34.398 [2024-07-23 13:47:25.151966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.398 [2024-07-23 13:47:25.291503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.398 [2024-07-23 13:47:25.291656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.398 [2024-07-23 13:47:25.295088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.398 [2024-07-23 13:47:25.295089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:35.008 13:47:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.008 13:47:25 -- common/autotest_common.sh@852 -- # return 0 00:05:35.008 13:47:25 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:35.008 13:47:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.008 13:47:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.008 13:47:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.008 13:47:25 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.008 13:47:25 -- common/autotest_common.sh@640 -- # local es=0 00:05:35.008 13:47:25 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.008 13:47:25 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:35.008 13:47:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:35.008 13:47:25 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:35.008 13:47:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:35.008 13:47:25 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:35.008 13:47:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.008 13:47:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.008 [2024-07-23 13:47:25.896109] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3088055 has claimed it. 00:05:35.008 request: 00:05:35.008 { 00:05:35.008 "method": "framework_enable_cpumask_locks", 00:05:35.008 "req_id": 1 00:05:35.008 } 00:05:35.008 Got JSON-RPC error response 00:05:35.008 response: 00:05:35.008 { 00:05:35.008 "code": -32603, 00:05:35.008 "message": "Failed to claim CPU core: 2" 00:05:35.008 } 00:05:35.008 13:47:25 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:35.008 13:47:25 -- common/autotest_common.sh@643 -- # es=1 00:05:35.008 13:47:25 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:35.008 13:47:25 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:35.008 13:47:25 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:35.008 13:47:25 -- event/cpu_locks.sh@158 -- # waitforlisten 3088055 /var/tmp/spdk.sock 00:05:35.008 13:47:25 -- common/autotest_common.sh@819 -- # '[' -z 3088055 ']' 00:05:35.008 13:47:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.008 13:47:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.008 13:47:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
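The JSON-RPC exchange above is the same lock check driven at runtime: both targets start with --disable-cpumask-locks, the first then claims its cores over RPC, and the second's attempt must fail on the shared core. With the rpc.py path from this run, the two calls are roughly:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
"$rpc" framework_enable_cpumask_locks                          # first target: claims its cores
"$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: "Failed to claim CPU core: 2" (-32603)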
00:05:35.008 13:47:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.008 13:47:25 -- common/autotest_common.sh@10 -- # set +x 00:05:35.266 13:47:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.266 13:47:26 -- common/autotest_common.sh@852 -- # return 0 00:05:35.266 13:47:26 -- event/cpu_locks.sh@159 -- # waitforlisten 3088187 /var/tmp/spdk2.sock 00:05:35.266 13:47:26 -- common/autotest_common.sh@819 -- # '[' -z 3088187 ']' 00:05:35.266 13:47:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.266 13:47:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:35.266 13:47:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.266 13:47:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:35.266 13:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:35.266 13:47:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.266 13:47:26 -- common/autotest_common.sh@852 -- # return 0 00:05:35.266 13:47:26 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:35.266 13:47:26 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:35.266 13:47:26 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:35.266 13:47:26 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:35.266 00:05:35.266 real 0m2.094s 00:05:35.266 user 0m0.857s 00:05:35.266 sys 0m0.162s 00:05:35.266 13:47:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.266 13:47:26 -- common/autotest_common.sh@10 -- # set +x 00:05:35.266 ************************************ 00:05:35.266 END TEST locking_overlapped_coremask_via_rpc 00:05:35.266 ************************************ 00:05:35.525 13:47:26 -- event/cpu_locks.sh@174 -- # cleanup 00:05:35.525 13:47:26 -- event/cpu_locks.sh@15 -- # [[ -z 3088055 ]] 00:05:35.525 13:47:26 -- event/cpu_locks.sh@15 -- # killprocess 3088055 00:05:35.525 13:47:26 -- common/autotest_common.sh@926 -- # '[' -z 3088055 ']' 00:05:35.525 13:47:26 -- common/autotest_common.sh@930 -- # kill -0 3088055 00:05:35.525 13:47:26 -- common/autotest_common.sh@931 -- # uname 00:05:35.525 13:47:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.525 13:47:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3088055 00:05:35.525 13:47:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:35.525 13:47:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:35.525 13:47:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3088055' 00:05:35.525 killing process with pid 3088055 00:05:35.525 13:47:26 -- common/autotest_common.sh@945 -- # kill 3088055 00:05:35.525 13:47:26 -- common/autotest_common.sh@950 -- # wait 3088055 00:05:35.784 13:47:26 -- event/cpu_locks.sh@16 -- # [[ -z 3088187 ]] 00:05:35.784 13:47:26 -- event/cpu_locks.sh@16 -- # killprocess 3088187 00:05:35.784 13:47:26 -- common/autotest_common.sh@926 -- # '[' -z 3088187 ']' 00:05:35.784 13:47:26 -- common/autotest_common.sh@930 -- # kill -0 3088187 00:05:35.784 13:47:26 -- common/autotest_common.sh@931 -- # uname 
00:05:35.784 13:47:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:35.784 13:47:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3088187 00:05:35.784 13:47:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:35.784 13:47:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:35.784 13:47:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3088187' 00:05:35.784 killing process with pid 3088187 00:05:35.784 13:47:26 -- common/autotest_common.sh@945 -- # kill 3088187 00:05:35.784 13:47:26 -- common/autotest_common.sh@950 -- # wait 3088187 00:05:36.352 13:47:27 -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.352 13:47:27 -- event/cpu_locks.sh@1 -- # cleanup 00:05:36.352 13:47:27 -- event/cpu_locks.sh@15 -- # [[ -z 3088055 ]] 00:05:36.352 13:47:27 -- event/cpu_locks.sh@15 -- # killprocess 3088055 00:05:36.352 13:47:27 -- common/autotest_common.sh@926 -- # '[' -z 3088055 ']' 00:05:36.352 13:47:27 -- common/autotest_common.sh@930 -- # kill -0 3088055 00:05:36.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3088055) - No such process 00:05:36.352 13:47:27 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3088055 is not found' 00:05:36.352 Process with pid 3088055 is not found 00:05:36.352 13:47:27 -- event/cpu_locks.sh@16 -- # [[ -z 3088187 ]] 00:05:36.352 13:47:27 -- event/cpu_locks.sh@16 -- # killprocess 3088187 00:05:36.352 13:47:27 -- common/autotest_common.sh@926 -- # '[' -z 3088187 ']' 00:05:36.352 13:47:27 -- common/autotest_common.sh@930 -- # kill -0 3088187 00:05:36.352 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3088187) - No such process 00:05:36.352 13:47:27 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3088187 is not found' 00:05:36.352 Process with pid 3088187 is not found 00:05:36.352 13:47:27 -- event/cpu_locks.sh@18 -- # rm -f 00:05:36.352 00:05:36.352 real 0m16.736s 00:05:36.352 user 0m29.286s 00:05:36.352 sys 0m4.608s 00:05:36.352 13:47:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.352 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.352 ************************************ 00:05:36.352 END TEST cpu_locks 00:05:36.352 ************************************ 00:05:36.352 00:05:36.352 real 0m41.797s 00:05:36.352 user 1m20.882s 00:05:36.352 sys 0m7.683s 00:05:36.352 13:47:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.352 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.352 ************************************ 00:05:36.352 END TEST event 00:05:36.352 ************************************ 00:05:36.352 13:47:27 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:36.352 13:47:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:36.352 13:47:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.352 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.352 ************************************ 00:05:36.352 START TEST thread 00:05:36.352 ************************************ 00:05:36.352 13:47:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:36.352 * Looking for test storage... 
00:05:36.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:36.352 13:47:27 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.352 13:47:27 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:36.352 13:47:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:36.352 13:47:27 -- common/autotest_common.sh@10 -- # set +x 00:05:36.352 ************************************ 00:05:36.352 START TEST thread_poller_perf 00:05:36.352 ************************************ 00:05:36.352 13:47:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:36.352 [2024-07-23 13:47:27.266730] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:36.352 [2024-07-23 13:47:27.266812] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088746 ] 00:05:36.352 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.352 [2024-07-23 13:47:27.323188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.611 [2024-07-23 13:47:27.396965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.611 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:37.548 ====================================== 00:05:37.548 busy:2308654408 (cyc) 00:05:37.548 total_run_count: 391000 00:05:37.548 tsc_hz: 2300000000 (cyc) 00:05:37.548 ====================================== 00:05:37.548 poller_cost: 5904 (cyc), 2566 (nsec) 00:05:37.548 00:05:37.548 real 0m1.247s 00:05:37.548 user 0m1.176s 00:05:37.548 sys 0m0.067s 00:05:37.548 13:47:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.548 13:47:28 -- common/autotest_common.sh@10 -- # set +x 00:05:37.548 ************************************ 00:05:37.548 END TEST thread_poller_perf 00:05:37.548 ************************************ 00:05:37.548 13:47:28 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.548 13:47:28 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:37.548 13:47:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.548 13:47:28 -- common/autotest_common.sh@10 -- # set +x 00:05:37.548 ************************************ 00:05:37.548 START TEST thread_poller_perf 00:05:37.548 ************************************ 00:05:37.548 13:47:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:37.548 [2024-07-23 13:47:28.554466] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:37.548 [2024-07-23 13:47:28.554537] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3088997 ] 00:05:37.807 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.807 [2024-07-23 13:47:28.611936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.807 [2024-07-23 13:47:28.680735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.807 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:39.187 ====================================== 00:05:39.187 busy:2302026574 (cyc) 00:05:39.187 total_run_count: 5465000 00:05:39.187 tsc_hz: 2300000000 (cyc) 00:05:39.187 ====================================== 00:05:39.187 poller_cost: 421 (cyc), 183 (nsec) 00:05:39.187 00:05:39.187 real 0m1.239s 00:05:39.187 user 0m1.164s 00:05:39.187 sys 0m0.071s 00:05:39.187 13:47:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.187 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:05:39.187 ************************************ 00:05:39.187 END TEST thread_poller_perf 00:05:39.187 ************************************ 00:05:39.187 13:47:29 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:39.187 00:05:39.187 real 0m2.646s 00:05:39.187 user 0m2.407s 00:05:39.187 sys 0m0.251s 00:05:39.187 13:47:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.187 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:05:39.187 ************************************ 00:05:39.187 END TEST thread 00:05:39.187 ************************************ 00:05:39.187 13:47:29 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:39.187 13:47:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.187 13:47:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.187 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:05:39.187 ************************************ 00:05:39.187 START TEST accel 00:05:39.187 ************************************ 00:05:39.187 13:47:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:39.187 * Looking for test storage... 00:05:39.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:39.187 13:47:29 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:39.187 13:47:29 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:39.187 13:47:29 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.187 13:47:29 -- accel/accel.sh@59 -- # spdk_tgt_pid=3089289 00:05:39.187 13:47:29 -- accel/accel.sh@60 -- # waitforlisten 3089289 00:05:39.187 13:47:29 -- common/autotest_common.sh@819 -- # '[' -z 3089289 ']' 00:05:39.187 13:47:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.187 13:47:29 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:39.187 13:47:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.187 13:47:29 -- accel/accel.sh@58 -- # build_accel_config 00:05:39.187 13:47:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
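The "Waiting for process to start up and listen on UNIX domain socket" message above comes from the harness's waitforlisten helper. A minimal sketch of that pattern (a simplified stand-in under assumed behavior, not the actual autotest_common.sh implementation):

# Poll until the target process is alive and its RPC socket exists.
wait_for_rpc_socket() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
        [[ -S $sock ]] && return 0               # socket is up, ready for RPCs
        sleep 0.1
    done
    return 1                                     # timed out
}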
00:05:39.187 13:47:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.187 13:47:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.187 13:47:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.187 13:47:29 -- common/autotest_common.sh@10 -- # set +x 00:05:39.187 13:47:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.187 13:47:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.187 13:47:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.187 13:47:29 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.187 13:47:29 -- accel/accel.sh@42 -- # jq -r . 00:05:39.187 [2024-07-23 13:47:29.970195] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:39.187 [2024-07-23 13:47:29.970246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089289 ] 00:05:39.187 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.187 [2024-07-23 13:47:30.025104] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.187 [2024-07-23 13:47:30.106983] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:39.187 [2024-07-23 13:47:30.107114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.125 13:47:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:40.125 13:47:30 -- common/autotest_common.sh@852 -- # return 0 00:05:40.125 13:47:30 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:40.125 13:47:30 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:40.125 13:47:30 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:40.125 13:47:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:40.125 13:47:30 -- common/autotest_common.sh@10 -- # set +x 00:05:40.125 13:47:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 
13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # IFS== 00:05:40.125 13:47:30 -- accel/accel.sh@64 -- # read -r opc module 00:05:40.125 13:47:30 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:40.125 13:47:30 -- accel/accel.sh@67 -- # killprocess 3089289 00:05:40.125 13:47:30 -- common/autotest_common.sh@926 -- # '[' -z 3089289 ']' 00:05:40.125 13:47:30 -- common/autotest_common.sh@930 -- # kill -0 3089289 00:05:40.125 13:47:30 -- common/autotest_common.sh@931 -- # uname 00:05:40.125 13:47:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:40.125 13:47:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3089289 00:05:40.125 13:47:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:40.125 13:47:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:40.125 13:47:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3089289' 00:05:40.125 killing process with pid 3089289 00:05:40.125 13:47:30 -- common/autotest_common.sh@945 -- # kill 3089289 00:05:40.125 13:47:30 -- common/autotest_common.sh@950 -- # wait 3089289 00:05:40.385 13:47:31 -- accel/accel.sh@68 -- # trap - ERR 00:05:40.385 13:47:31 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:40.385 13:47:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:40.385 13:47:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.385 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:05:40.385 13:47:31 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:40.385 13:47:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:40.385 13:47:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.385 13:47:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.385 13:47:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.385 13:47:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.385 13:47:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.385 13:47:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.385 13:47:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.385 13:47:31 -- accel/accel.sh@42 -- # jq -r . 
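The kill sequence above ("killing process with pid 3089289") is the harness's killprocess pattern: probe the pid with kill -0, check the command name so a sudo wrapper is never signalled, then kill and reap. Sketched here under those assumptions, with the pid-validity checks abbreviated:

# Simplified killprocess, mirroring the checks traced above.
killprocess_sketch() {
    local pid=$1 name
    kill -0 "$pid" || return 1                 # pid must refer to a live process
    name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0 on Linux
    [[ $name != sudo ]] || return 1            # never signal the sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                 # SIGTERM, then collect the exit status
}

Note that wait can only reap children of the current shell, which is why the harness launches targets from the same shell that later cleans them up.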
00:05:40.385 13:47:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.385 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:05:40.385 13:47:31 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:40.385 13:47:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:40.385 13:47:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.385 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:05:40.385 ************************************ 00:05:40.385 START TEST accel_missing_filename 00:05:40.385 ************************************ 00:05:40.385 13:47:31 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:40.385 13:47:31 -- common/autotest_common.sh@640 -- # local es=0 00:05:40.385 13:47:31 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:40.385 13:47:31 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:40.385 13:47:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.385 13:47:31 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:40.385 13:47:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.385 13:47:31 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:40.385 13:47:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:40.385 13:47:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.385 13:47:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.385 13:47:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.385 13:47:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.385 13:47:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.385 13:47:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.385 13:47:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.385 13:47:31 -- accel/accel.sh@42 -- # jq -r . 00:05:40.385 [2024-07-23 13:47:31.316835] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:40.385 [2024-07-23 13:47:31.316901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089555 ] 00:05:40.385 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.385 [2024-07-23 13:47:31.374754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.644 [2024-07-23 13:47:31.445919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.644 [2024-07-23 13:47:31.487109] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:40.644 [2024-07-23 13:47:31.547198] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:40.644 A filename is required. 
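"A filename is required." is the expected outcome here: compress without -l must fail, and the NOT wrapper turns that failure into a test pass by inverting the exit status. A minimal sketch of the idea (the real helper also normalizes the exit code, as the es= lines just below show):

# Succeed only when the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # unexpected success
    fi
    return 0        # expected failure
}

# usage, as in the trace above:
# NOT accel_perf -t 1 -w compress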
00:05:40.644 13:47:31 -- common/autotest_common.sh@643 -- # es=234 00:05:40.645 13:47:31 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:40.645 13:47:31 -- common/autotest_common.sh@652 -- # es=106 00:05:40.645 13:47:31 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:40.645 13:47:31 -- common/autotest_common.sh@660 -- # es=1 00:05:40.645 13:47:31 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:40.645 00:05:40.645 real 0m0.354s 00:05:40.645 user 0m0.276s 00:05:40.645 sys 0m0.114s 00:05:40.645 13:47:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.645 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:05:40.645 ************************************ 00:05:40.645 END TEST accel_missing_filename 00:05:40.645 ************************************ 00:05:40.905 13:47:31 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.905 13:47:31 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:40.905 13:47:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.905 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:05:40.905 ************************************ 00:05:40.905 START TEST accel_compress_verify 00:05:40.905 ************************************ 00:05:40.905 13:47:31 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.905 13:47:31 -- common/autotest_common.sh@640 -- # local es=0 00:05:40.905 13:47:31 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.905 13:47:31 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:40.905 13:47:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.905 13:47:31 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:40.905 13:47:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:40.905 13:47:31 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.905 13:47:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:40.905 13:47:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.905 13:47:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.905 13:47:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.905 13:47:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.905 13:47:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.905 13:47:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.905 13:47:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.905 13:47:31 -- accel/accel.sh@42 -- # jq -r . 00:05:40.905 [2024-07-23 13:47:31.706591] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
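The es= sequence at the start of this block (es=234, then 106, then 1) reflects a common normalization: exit statuses above 128 conventionally encode 128 plus a signal number, so the harness strips that bias before collapsing any remaining failure to 1. Re-traced as a sketch:

# Normalize an exit status the way the es= lines above suggest.
es=234
(( es > 128 )) && es=$(( es - 128 ))   # 234 -> 106: remove the signal bias
case $es in
    0) ;;                              # success would fail a NOT test
    *) es=1 ;;                         # any failure collapses to 1
esac
echo "es=$es"                          # es=1

The compress_verify case below follows the same path: 161 becomes 33, then 1.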
00:05:40.905 [2024-07-23 13:47:31.706665] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089587 ] 00:05:40.905 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.905 [2024-07-23 13:47:31.760919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.905 [2024-07-23 13:47:31.831024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.905 [2024-07-23 13:47:31.871842] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:41.166 [2024-07-23 13:47:31.932398] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:41.166 00:05:41.166 Compression does not support the verify option, aborting. 00:05:41.166 13:47:32 -- common/autotest_common.sh@643 -- # es=161 00:05:41.166 13:47:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.166 13:47:32 -- common/autotest_common.sh@652 -- # es=33 00:05:41.166 13:47:32 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:41.166 13:47:32 -- common/autotest_common.sh@660 -- # es=1 00:05:41.166 13:47:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.166 00:05:41.166 real 0m0.349s 00:05:41.166 user 0m0.273s 00:05:41.166 sys 0m0.109s 00:05:41.166 13:47:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.166 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.166 ************************************ 00:05:41.166 END TEST accel_compress_verify 00:05:41.166 ************************************ 00:05:41.166 13:47:32 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:41.166 13:47:32 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:41.166 13:47:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.166 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.166 ************************************ 00:05:41.166 START TEST accel_wrong_workload 00:05:41.166 ************************************ 00:05:41.166 13:47:32 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:41.166 13:47:32 -- common/autotest_common.sh@640 -- # local es=0 00:05:41.166 13:47:32 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:41.166 13:47:32 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:41.166 13:47:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.166 13:47:32 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:41.166 13:47:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.166 13:47:32 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:41.166 13:47:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:41.166 13:47:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.166 13:47:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.166 13:47:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.166 13:47:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.166 13:47:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.166 13:47:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.166 13:47:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.166 13:47:32 -- accel/accel.sh@42 -- # jq -r . 
00:05:41.166 Unsupported workload type: foobar 00:05:41.166 [2024-07-23 13:47:32.086027] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:41.166 accel_perf options: 00:05:41.166 [-h help message] 00:05:41.166 [-q queue depth per core] 00:05:41.166 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.166 [-T number of threads per core 00:05:41.166 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.166 [-t time in seconds] 00:05:41.166 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.166 [ dif_verify, , dif_generate, dif_generate_copy 00:05:41.167 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.167 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.167 [-S for crc32c workload, use this seed value (default 0) 00:05:41.167 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.167 [-f for fill workload, use this BYTE value (default 255) 00:05:41.167 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.167 [-y verify result if this switch is on] 00:05:41.167 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.167 Can be used to spread operations across a wider range of memory. 00:05:41.167 13:47:32 -- common/autotest_common.sh@643 -- # es=1 00:05:41.167 13:47:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.167 13:47:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:41.167 13:47:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.167 00:05:41.167 real 0m0.034s 00:05:41.167 user 0m0.018s 00:05:41.167 sys 0m0.016s 00:05:41.167 13:47:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.167 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.167 ************************************ 00:05:41.167 END TEST accel_wrong_workload 00:05:41.167 ************************************ 00:05:41.167 Error: writing output failed: Broken pipe 00:05:41.167 13:47:32 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.167 13:47:32 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:41.167 13:47:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.167 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.167 ************************************ 00:05:41.167 START TEST accel_negative_buffers 00:05:41.167 ************************************ 00:05:41.167 13:47:32 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:41.167 13:47:32 -- common/autotest_common.sh@640 -- # local es=0 00:05:41.167 13:47:32 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:41.167 13:47:32 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:41.167 13:47:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.167 13:47:32 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:41.167 13:47:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:41.167 13:47:32 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:41.167 13:47:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:41.167 13:47:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.167 13:47:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.167 13:47:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.167 13:47:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.167 13:47:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.167 13:47:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.167 13:47:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.167 13:47:32 -- accel/accel.sh@42 -- # jq -r . 00:05:41.167 -x option must be non-negative. 00:05:41.167 [2024-07-23 13:47:32.151503] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:41.167 accel_perf options: 00:05:41.167 [-h help message] 00:05:41.167 [-q queue depth per core] 00:05:41.167 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:41.167 [-T number of threads per core 00:05:41.167 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:41.167 [-t time in seconds] 00:05:41.167 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:41.167 [ dif_verify, , dif_generate, dif_generate_copy 00:05:41.167 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:41.167 [-l for compress/decompress workloads, name of uncompressed input file 00:05:41.167 [-S for crc32c workload, use this seed value (default 0) 00:05:41.167 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:41.167 [-f for fill workload, use this BYTE value (default 255) 00:05:41.167 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:41.167 [-y verify result if this switch is on] 00:05:41.167 [-a tasks to allocate per core (default: same value as -q)] 00:05:41.167 Can be used to spread operations across a wider range of memory. 
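Both failures in this stretch, the foobar workload and the negative -x value, are rejected by accel_perf's own option parsing before any work is queued; the usage text above is printed from the C code. The same pre-flight guard can be sketched in shell, purely as an illustration:

# Illustrative guard mirroring the two rejections traced above.
validate_accel_args() {
    local workload=$1 xor_srcs=${2:-2}
    case $workload in
        copy|fill|crc32c|copy_crc32c|compare|compress|decompress|dualcast|xor|dif_verify|dif_generate|dif_generate_copy) ;;
        *) echo "Unsupported workload type: $workload" >&2; return 1 ;;
    esac
    if (( xor_srcs < 0 )); then
        echo "-x option must be non-negative." >&2
        return 1
    fi
}

validate_accel_args foobar   # prints the first error above
validate_accel_args xor -1   # prints the second

The "Error: writing output failed: Broken pipe" lines appear to be a harmless side effect of the wrapped command's stdout pipe closing early, not a test failure.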
00:05:41.167 13:47:32 -- common/autotest_common.sh@643 -- # es=1 00:05:41.167 13:47:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:41.167 13:47:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:41.167 13:47:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:41.167 00:05:41.167 real 0m0.028s 00:05:41.167 user 0m0.018s 00:05:41.167 sys 0m0.009s 00:05:41.167 13:47:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.167 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.167 ************************************ 00:05:41.167 END TEST accel_negative_buffers 00:05:41.167 ************************************ 00:05:41.167 Error: writing output failed: Broken pipe 00:05:41.427 13:47:32 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:41.427 13:47:32 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:41.427 13:47:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.427 13:47:32 -- common/autotest_common.sh@10 -- # set +x 00:05:41.427 ************************************ 00:05:41.427 START TEST accel_crc32c 00:05:41.427 ************************************ 00:05:41.427 13:47:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:41.427 13:47:32 -- accel/accel.sh@16 -- # local accel_opc 00:05:41.427 13:47:32 -- accel/accel.sh@17 -- # local accel_module 00:05:41.427 13:47:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:41.427 13:47:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:41.427 13:47:32 -- accel/accel.sh@12 -- # build_accel_config 00:05:41.427 13:47:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:41.427 13:47:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:41.427 13:47:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:41.427 13:47:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:41.427 13:47:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:41.427 13:47:32 -- accel/accel.sh@41 -- # local IFS=, 00:05:41.427 13:47:32 -- accel/accel.sh@42 -- # jq -r . 00:05:41.427 [2024-07-23 13:47:32.221588] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:41.427 [2024-07-23 13:47:32.221657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089641 ] 00:05:41.427 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.427 [2024-07-23 13:47:32.279031] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.427 [2024-07-23 13:47:32.353421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.804 13:47:33 -- accel/accel.sh@18 -- # out=' 00:05:42.804 SPDK Configuration: 00:05:42.804 Core mask: 0x1 00:05:42.804 00:05:42.804 Accel Perf Configuration: 00:05:42.804 Workload Type: crc32c 00:05:42.804 CRC-32C seed: 32 00:05:42.804 Transfer size: 4096 bytes 00:05:42.804 Vector count 1 00:05:42.804 Module: software 00:05:42.804 Queue depth: 32 00:05:42.804 Allocate depth: 32 00:05:42.804 # threads/core: 1 00:05:42.804 Run time: 1 seconds 00:05:42.804 Verify: Yes 00:05:42.804 00:05:42.804 Running for 1 seconds... 
00:05:42.804 00:05:42.804 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:42.804 ------------------------------------------------------------------------------------ 00:05:42.804 0,0 569600/s 2225 MiB/s 0 0 00:05:42.804 ==================================================================================== 00:05:42.804 Total 569600/s 2225 MiB/s 0 0' 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:42.804 13:47:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:42.804 13:47:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.804 13:47:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.804 13:47:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.804 13:47:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.804 13:47:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.804 13:47:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.804 13:47:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.804 13:47:33 -- accel/accel.sh@42 -- # jq -r . 00:05:42.804 [2024-07-23 13:47:33.577237] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:42.804 [2024-07-23 13:47:33.577297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3089873 ] 00:05:42.804 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.804 [2024-07-23 13:47:33.630412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.804 [2024-07-23 13:47:33.702922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val= 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val= 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val=0x1 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val= 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val= 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val=crc32c 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val=32 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 
13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val= 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val=software 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val=32 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val=32 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val=1 00:05:42.804 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.804 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.804 13:47:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.805 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.805 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.805 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.805 13:47:33 -- accel/accel.sh@21 -- # val=Yes 00:05:42.805 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.805 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.805 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.805 13:47:33 -- accel/accel.sh@21 -- # val= 00:05:42.805 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.805 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.805 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:42.805 13:47:33 -- accel/accel.sh@21 -- # val= 00:05:42.805 13:47:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.805 13:47:33 -- accel/accel.sh@20 -- # IFS=: 00:05:42.805 13:47:33 -- accel/accel.sh@20 -- # read -r var val 00:05:44.184 13:47:34 -- accel/accel.sh@21 -- # val= 00:05:44.184 13:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # IFS=: 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # read -r var val 00:05:44.184 13:47:34 -- accel/accel.sh@21 -- # val= 00:05:44.184 13:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # IFS=: 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # read -r var val 00:05:44.184 13:47:34 -- accel/accel.sh@21 -- # val= 00:05:44.184 13:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # IFS=: 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # read -r var val 00:05:44.184 13:47:34 -- accel/accel.sh@21 -- # val= 00:05:44.184 13:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # IFS=: 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # read -r var val 00:05:44.184 13:47:34 -- accel/accel.sh@21 -- # val= 00:05:44.184 13:47:34 -- accel/accel.sh@22 -- # case "$var" in 
00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # IFS=: 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # read -r var val 00:05:44.184 13:47:34 -- accel/accel.sh@21 -- # val= 00:05:44.184 13:47:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # IFS=: 00:05:44.184 13:47:34 -- accel/accel.sh@20 -- # read -r var val 00:05:44.184 13:47:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:44.184 13:47:34 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:44.184 13:47:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:44.184 00:05:44.184 real 0m2.714s 00:05:44.184 user 0m2.495s 00:05:44.184 sys 0m0.226s 00:05:44.184 13:47:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.184 13:47:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.184 ************************************ 00:05:44.184 END TEST accel_crc32c 00:05:44.184 ************************************ 00:05:44.184 13:47:34 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:44.184 13:47:34 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:44.184 13:47:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:44.184 13:47:34 -- common/autotest_common.sh@10 -- # set +x 00:05:44.184 ************************************ 00:05:44.184 START TEST accel_crc32c_C2 00:05:44.184 ************************************ 00:05:44.184 13:47:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:44.184 13:47:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:44.184 13:47:34 -- accel/accel.sh@17 -- # local accel_module 00:05:44.184 13:47:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:44.184 13:47:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:44.184 13:47:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:44.184 13:47:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:44.184 13:47:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:44.184 13:47:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:44.184 13:47:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:44.184 13:47:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:44.184 13:47:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:44.184 13:47:34 -- accel/accel.sh@42 -- # jq -r . 00:05:44.184 [2024-07-23 13:47:34.970060] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
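The bandwidth column in these result tables follows directly from the transfer rate and the 4096-byte transfer size. Checking the single-vector crc32c run above:

# Sanity-check the reported bandwidth: transfers/s * bytes per transfer, in MiB.
transfers=569600
xfer_bytes=4096
echo "$(( transfers * xfer_bytes / 1048576 )) MiB/s"   # 2225, matching the table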
00:05:44.184 [2024-07-23 13:47:34.970137] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090131 ] 00:05:44.184 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.184 [2024-07-23 13:47:35.024449] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.184 [2024-07-23 13:47:35.094488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.561 13:47:36 -- accel/accel.sh@18 -- # out=' 00:05:45.561 SPDK Configuration: 00:05:45.561 Core mask: 0x1 00:05:45.561 00:05:45.561 Accel Perf Configuration: 00:05:45.561 Workload Type: crc32c 00:05:45.561 CRC-32C seed: 0 00:05:45.561 Transfer size: 4096 bytes 00:05:45.561 Vector count 2 00:05:45.561 Module: software 00:05:45.561 Queue depth: 32 00:05:45.561 Allocate depth: 32 00:05:45.561 # threads/core: 1 00:05:45.561 Run time: 1 seconds 00:05:45.561 Verify: Yes 00:05:45.561 00:05:45.561 Running for 1 seconds... 00:05:45.561 00:05:45.561 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:45.561 ------------------------------------------------------------------------------------ 00:05:45.561 0,0 441280/s 3447 MiB/s 0 0 00:05:45.561 ==================================================================================== 00:05:45.561 Total 441280/s 1723 MiB/s 0 0' 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:45.561 13:47:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:45.561 13:47:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.561 13:47:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.561 13:47:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.561 13:47:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.561 13:47:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.561 13:47:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.561 13:47:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.561 13:47:36 -- accel/accel.sh@42 -- # jq -r . 00:05:45.561 [2024-07-23 13:47:36.317495] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
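The -C 2 table above is worth a second look: the per-core row reports 3447 MiB/s while the Total row reports 1723 MiB/s for the same 441280 transfers/s. The per-core figure counts both 4096-byte vectors per operation; the Total row appears to count only one, which looks like a quirk of the tool's summary line rather than of the measurement:

# Both figures derive from the same operation count.
ops=441280; xfer_bytes=4096; vectors=2
echo "$(( ops * xfer_bytes * vectors / 1048576 )) MiB/s"   # 3447, the per-core row
echo "$(( ops * xfer_bytes / 1048576 )) MiB/s"             # 1723, the Total row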
00:05:45.561 [2024-07-23 13:47:36.317553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090363 ] 00:05:45.561 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.561 [2024-07-23 13:47:36.371463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.561 [2024-07-23 13:47:36.443716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val= 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val= 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val=0x1 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val= 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val= 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val=crc32c 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val=0 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val= 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val=software 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@23 -- # accel_module=software 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val=32 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val=32 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- 
accel/accel.sh@21 -- # val=1 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val=Yes 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val= 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:45.561 13:47:36 -- accel/accel.sh@21 -- # val= 00:05:45.561 13:47:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # IFS=: 00:05:45.561 13:47:36 -- accel/accel.sh@20 -- # read -r var val 00:05:46.940 13:47:37 -- accel/accel.sh@21 -- # val= 00:05:46.940 13:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # IFS=: 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # read -r var val 00:05:46.940 13:47:37 -- accel/accel.sh@21 -- # val= 00:05:46.940 13:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # IFS=: 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # read -r var val 00:05:46.940 13:47:37 -- accel/accel.sh@21 -- # val= 00:05:46.940 13:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # IFS=: 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # read -r var val 00:05:46.940 13:47:37 -- accel/accel.sh@21 -- # val= 00:05:46.940 13:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # IFS=: 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # read -r var val 00:05:46.940 13:47:37 -- accel/accel.sh@21 -- # val= 00:05:46.940 13:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # IFS=: 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # read -r var val 00:05:46.940 13:47:37 -- accel/accel.sh@21 -- # val= 00:05:46.940 13:47:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # IFS=: 00:05:46.940 13:47:37 -- accel/accel.sh@20 -- # read -r var val 00:05:46.940 13:47:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.940 13:47:37 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:46.940 13:47:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.940 00:05:46.940 real 0m2.704s 00:05:46.940 user 0m2.494s 00:05:46.940 sys 0m0.219s 00:05:46.940 13:47:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.940 13:47:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.940 ************************************ 00:05:46.940 END TEST accel_crc32c_C2 00:05:46.940 ************************************ 00:05:46.940 13:47:37 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:46.940 13:47:37 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:46.940 13:47:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.940 13:47:37 -- common/autotest_common.sh@10 -- # set +x 00:05:46.940 ************************************ 00:05:46.940 START TEST accel_copy 
00:05:46.940 ************************************ 00:05:46.940 13:47:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:46.940 13:47:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.940 13:47:37 -- accel/accel.sh@17 -- # local accel_module 00:05:46.940 13:47:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:46.940 13:47:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:46.940 13:47:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.940 13:47:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.940 13:47:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.940 13:47:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.940 13:47:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.940 13:47:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.940 13:47:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.940 13:47:37 -- accel/accel.sh@42 -- # jq -r . 00:05:46.940 [2024-07-23 13:47:37.706403] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:46.940 [2024-07-23 13:47:37.706478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090616 ] 00:05:46.940 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.940 [2024-07-23 13:47:37.762737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.940 [2024-07-23 13:47:37.833566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.321 13:47:39 -- accel/accel.sh@18 -- # out=' 00:05:48.321 SPDK Configuration: 00:05:48.321 Core mask: 0x1 00:05:48.321 00:05:48.321 Accel Perf Configuration: 00:05:48.321 Workload Type: copy 00:05:48.321 Transfer size: 4096 bytes 00:05:48.321 Vector count 1 00:05:48.321 Module: software 00:05:48.321 Queue depth: 32 00:05:48.321 Allocate depth: 32 00:05:48.321 # threads/core: 1 00:05:48.321 Run time: 1 seconds 00:05:48.321 Verify: Yes 00:05:48.321 00:05:48.321 Running for 1 seconds... 00:05:48.321 00:05:48.321 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.321 ------------------------------------------------------------------------------------ 00:05:48.321 0,0 423264/s 1653 MiB/s 0 0 00:05:48.321 ==================================================================================== 00:05:48.321 Total 423264/s 1653 MiB/s 0 0' 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:48.321 13:47:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:48.321 13:47:39 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.321 13:47:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.321 13:47:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.321 13:47:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.321 13:47:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.321 13:47:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.321 13:47:39 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.321 13:47:39 -- accel/accel.sh@42 -- # jq -r . 00:05:48.321 [2024-07-23 13:47:39.054831] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:48.321 [2024-07-23 13:47:39.054889] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3090854 ] 00:05:48.321 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.321 [2024-07-23 13:47:39.109035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.321 [2024-07-23 13:47:39.180914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val= 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val= 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val=0x1 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val= 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val= 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val=copy 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val= 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val=software 00:05:48.321 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.321 13:47:39 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.321 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.321 13:47:39 -- accel/accel.sh@21 -- # val=32 00:05:48.322 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.322 13:47:39 -- accel/accel.sh@21 -- # val=32 00:05:48.322 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.322 13:47:39 -- accel/accel.sh@21 -- # val=1 00:05:48.322 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.322 13:47:39 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:48.322 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.322 13:47:39 -- accel/accel.sh@21 -- # val=Yes 00:05:48.322 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.322 13:47:39 -- accel/accel.sh@21 -- # val= 00:05:48.322 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:48.322 13:47:39 -- accel/accel.sh@21 -- # val= 00:05:48.322 13:47:39 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # IFS=: 00:05:48.322 13:47:39 -- accel/accel.sh@20 -- # read -r var val 00:05:49.701 13:47:40 -- accel/accel.sh@21 -- # val= 00:05:49.701 13:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.701 13:47:40 -- accel/accel.sh@20 -- # IFS=: 00:05:49.701 13:47:40 -- accel/accel.sh@20 -- # read -r var val 00:05:49.701 13:47:40 -- accel/accel.sh@21 -- # val= 00:05:49.701 13:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.701 13:47:40 -- accel/accel.sh@20 -- # IFS=: 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # read -r var val 00:05:49.702 13:47:40 -- accel/accel.sh@21 -- # val= 00:05:49.702 13:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # IFS=: 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # read -r var val 00:05:49.702 13:47:40 -- accel/accel.sh@21 -- # val= 00:05:49.702 13:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # IFS=: 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # read -r var val 00:05:49.702 13:47:40 -- accel/accel.sh@21 -- # val= 00:05:49.702 13:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # IFS=: 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # read -r var val 00:05:49.702 13:47:40 -- accel/accel.sh@21 -- # val= 00:05:49.702 13:47:40 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # IFS=: 00:05:49.702 13:47:40 -- accel/accel.sh@20 -- # read -r var val 00:05:49.702 13:47:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.702 13:47:40 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:49.702 13:47:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.702 00:05:49.702 real 0m2.704s 00:05:49.702 user 0m2.488s 00:05:49.702 sys 0m0.222s 00:05:49.702 13:47:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.702 13:47:40 -- common/autotest_common.sh@10 -- # set +x 00:05:49.702 ************************************ 00:05:49.702 END TEST accel_copy 00:05:49.702 ************************************ 00:05:49.702 13:47:40 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.702 13:47:40 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:49.702 13:47:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.702 13:47:40 -- common/autotest_common.sh@10 -- # set +x 00:05:49.702 ************************************ 00:05:49.702 START TEST accel_fill 00:05:49.702 ************************************ 00:05:49.702 13:47:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.702 13:47:40 -- accel/accel.sh@16 -- # local accel_opc 
00:05:49.702 13:47:40 -- accel/accel.sh@17 -- # local accel_module 00:05:49.702 13:47:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.702 13:47:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:49.702 13:47:40 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.702 13:47:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.702 13:47:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.702 13:47:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.702 13:47:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.702 13:47:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.702 13:47:40 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.702 13:47:40 -- accel/accel.sh@42 -- # jq -r . 00:05:49.702 [2024-07-23 13:47:40.443911] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:49.702 [2024-07-23 13:47:40.443977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091107 ] 00:05:49.702 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.702 [2024-07-23 13:47:40.500398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.702 [2024-07-23 13:47:40.570242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.081 13:47:41 -- accel/accel.sh@18 -- # out=' 00:05:51.081 SPDK Configuration: 00:05:51.081 Core mask: 0x1 00:05:51.081 00:05:51.081 Accel Perf Configuration: 00:05:51.081 Workload Type: fill 00:05:51.081 Fill pattern: 0x80 00:05:51.081 Transfer size: 4096 bytes 00:05:51.081 Vector count 1 00:05:51.081 Module: software 00:05:51.081 Queue depth: 64 00:05:51.081 Allocate depth: 64 00:05:51.081 # threads/core: 1 00:05:51.081 Run time: 1 seconds 00:05:51.081 Verify: Yes 00:05:51.081 00:05:51.081 Running for 1 seconds... 00:05:51.081 00:05:51.081 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.081 ------------------------------------------------------------------------------------ 00:05:51.081 0,0 656448/s 2564 MiB/s 0 0 00:05:51.081 ==================================================================================== 00:05:51.081 Total 656448/s 2564 MiB/s 0 0' 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:51.081 13:47:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:51.081 13:47:41 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.081 13:47:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.081 13:47:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.081 13:47:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.081 13:47:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.081 13:47:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.081 13:47:41 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.081 13:47:41 -- accel/accel.sh@42 -- # jq -r . 00:05:51.081 [2024-07-23 13:47:41.793561] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
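
The fill report above is produced by the accel_perf example binary with exactly the flags traced in accel.sh. A standalone replay, assuming accel_perf may be run without the -c /dev/fd/62 JSON config (the software module shown in the report needs no extra configuration):

#!/usr/bin/env bash
# Replay of the accel_fill run above; every flag value is copied verbatim
# from the accel.sh trace, and SPDK_DIR matches this job's workspace.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
args=(
  -t 1     # run time: 1 second
  -w fill  # workload type
  -f 128   # fill pattern (0x80 in the report)
  -q 64    # queue depth
  -a 64    # allocate depth
  -y       # verify the output buffers
)
"$SPDK_DIR/build/examples/accel_perf" "${args[@]}"
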
00:05:51.081 [2024-07-23 13:47:41.793618] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091339 ] 00:05:51.081 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.081 [2024-07-23 13:47:41.846911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.081 [2024-07-23 13:47:41.917080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val= 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val= 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val=0x1 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val= 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val= 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val=fill 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val=0x80 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val= 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val=software 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val=64 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val=64 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- 
accel/accel.sh@21 -- # val=1 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val=Yes 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val= 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:51.081 13:47:41 -- accel/accel.sh@21 -- # val= 00:05:51.081 13:47:41 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # IFS=: 00:05:51.081 13:47:41 -- accel/accel.sh@20 -- # read -r var val 00:05:52.495 13:47:43 -- accel/accel.sh@21 -- # val= 00:05:52.495 13:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # IFS=: 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # read -r var val 00:05:52.495 13:47:43 -- accel/accel.sh@21 -- # val= 00:05:52.495 13:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # IFS=: 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # read -r var val 00:05:52.495 13:47:43 -- accel/accel.sh@21 -- # val= 00:05:52.495 13:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # IFS=: 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # read -r var val 00:05:52.495 13:47:43 -- accel/accel.sh@21 -- # val= 00:05:52.495 13:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # IFS=: 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # read -r var val 00:05:52.495 13:47:43 -- accel/accel.sh@21 -- # val= 00:05:52.495 13:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # IFS=: 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # read -r var val 00:05:52.495 13:47:43 -- accel/accel.sh@21 -- # val= 00:05:52.495 13:47:43 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # IFS=: 00:05:52.495 13:47:43 -- accel/accel.sh@20 -- # read -r var val 00:05:52.495 13:47:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.495 13:47:43 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:52.495 13:47:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.496 00:05:52.496 real 0m2.703s 00:05:52.496 user 0m2.491s 00:05:52.496 sys 0m0.221s 00:05:52.496 13:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.496 13:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:52.496 ************************************ 00:05:52.496 END TEST accel_fill 00:05:52.496 ************************************ 00:05:52.496 13:47:43 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:52.496 13:47:43 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:52.496 13:47:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.496 13:47:43 -- common/autotest_common.sh@10 -- # set +x 00:05:52.496 ************************************ 00:05:52.496 START TEST 
accel_copy_crc32c 00:05:52.496 ************************************ 00:05:52.496 13:47:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:52.496 13:47:43 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.496 13:47:43 -- accel/accel.sh@17 -- # local accel_module 00:05:52.496 13:47:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:52.496 13:47:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:52.496 13:47:43 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.496 13:47:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.496 13:47:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.496 13:47:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.496 13:47:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.496 13:47:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.496 13:47:43 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.496 13:47:43 -- accel/accel.sh@42 -- # jq -r . 00:05:52.496 [2024-07-23 13:47:43.180895] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:52.496 [2024-07-23 13:47:43.180956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091594 ] 00:05:52.496 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.496 [2024-07-23 13:47:43.234473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.496 [2024-07-23 13:47:43.304769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.874 13:47:44 -- accel/accel.sh@18 -- # out=' 00:05:53.874 SPDK Configuration: 00:05:53.874 Core mask: 0x1 00:05:53.874 00:05:53.874 Accel Perf Configuration: 00:05:53.874 Workload Type: copy_crc32c 00:05:53.874 CRC-32C seed: 0 00:05:53.874 Vector size: 4096 bytes 00:05:53.874 Transfer size: 4096 bytes 00:05:53.874 Vector count 1 00:05:53.874 Module: software 00:05:53.874 Queue depth: 32 00:05:53.874 Allocate depth: 32 00:05:53.874 # threads/core: 1 00:05:53.874 Run time: 1 seconds 00:05:53.874 Verify: Yes 00:05:53.874 00:05:53.874 Running for 1 seconds... 00:05:53.874 00:05:53.874 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:53.874 ------------------------------------------------------------------------------------ 00:05:53.874 0,0 324896/s 1269 MiB/s 0 0 00:05:53.874 ==================================================================================== 00:05:53.874 Total 324896/s 1269 MiB/s 0 0' 00:05:53.874 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.874 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:53.875 13:47:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:53.875 13:47:44 -- accel/accel.sh@12 -- # build_accel_config 00:05:53.875 13:47:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:53.875 13:47:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.875 13:47:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.875 13:47:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:53.875 13:47:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:53.875 13:47:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:53.875 13:47:44 -- accel/accel.sh@42 -- # jq -r . 
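
The bandwidth column in these reports is plain arithmetic, transfers per second times the transfer size. A quick integer check against the copy_crc32c table above:

#!/usr/bin/env bash
# MiB/s = transfers/s * transfer_size / 2^20, using "0,0 324896/s" and
# "Transfer size: 4096 bytes" from the report above.
transfers_per_sec=324896
transfer_size=4096
echo $(( transfers_per_sec * transfer_size / 1024 / 1024 ))   # prints 1269, matching the 1269 MiB/s rows
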
00:05:53.875 [2024-07-23 13:47:44.530305] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:53.875 [2024-07-23 13:47:44.530381] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3091830 ] 00:05:53.875 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.875 [2024-07-23 13:47:44.584415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.875 [2024-07-23 13:47:44.652288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val= 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val= 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val=0x1 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val= 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val= 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val=0 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val= 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val=software 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@23 -- # accel_module=software 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val=32 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 
00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val=32 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val=1 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val=Yes 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val= 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:53.875 13:47:44 -- accel/accel.sh@21 -- # val= 00:05:53.875 13:47:44 -- accel/accel.sh@22 -- # case "$var" in 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # IFS=: 00:05:53.875 13:47:44 -- accel/accel.sh@20 -- # read -r var val 00:05:55.255 13:47:45 -- accel/accel.sh@21 -- # val= 00:05:55.255 13:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # IFS=: 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # read -r var val 00:05:55.255 13:47:45 -- accel/accel.sh@21 -- # val= 00:05:55.255 13:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # IFS=: 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # read -r var val 00:05:55.255 13:47:45 -- accel/accel.sh@21 -- # val= 00:05:55.255 13:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # IFS=: 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # read -r var val 00:05:55.255 13:47:45 -- accel/accel.sh@21 -- # val= 00:05:55.255 13:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # IFS=: 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # read -r var val 00:05:55.255 13:47:45 -- accel/accel.sh@21 -- # val= 00:05:55.255 13:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # IFS=: 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # read -r var val 00:05:55.255 13:47:45 -- accel/accel.sh@21 -- # val= 00:05:55.255 13:47:45 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # IFS=: 00:05:55.255 13:47:45 -- accel/accel.sh@20 -- # read -r var val 00:05:55.255 13:47:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.255 13:47:45 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:55.255 13:47:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.255 00:05:55.255 real 0m2.700s 00:05:55.255 user 0m2.483s 00:05:55.255 sys 0m0.225s 00:05:55.255 13:47:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.255 13:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:55.255 ************************************ 00:05:55.255 END TEST accel_copy_crc32c 00:05:55.255 ************************************ 00:05:55.255 
13:47:45 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:55.255 13:47:45 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:55.255 13:47:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.255 13:47:45 -- common/autotest_common.sh@10 -- # set +x 00:05:55.255 ************************************ 00:05:55.255 START TEST accel_copy_crc32c_C2 00:05:55.255 ************************************ 00:05:55.255 13:47:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:55.255 13:47:45 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.255 13:47:45 -- accel/accel.sh@17 -- # local accel_module 00:05:55.255 13:47:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:55.255 13:47:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:55.255 13:47:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.255 13:47:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.255 13:47:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.255 13:47:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.255 13:47:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.255 13:47:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.255 13:47:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.255 13:47:45 -- accel/accel.sh@42 -- # jq -r . 00:05:55.255 [2024-07-23 13:47:45.914200] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:55.255 [2024-07-23 13:47:45.914274] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092083 ] 00:05:55.255 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.255 [2024-07-23 13:47:45.967726] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.255 [2024-07-23 13:47:46.037326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.635 13:47:47 -- accel/accel.sh@18 -- # out=' 00:05:56.636 SPDK Configuration: 00:05:56.636 Core mask: 0x1 00:05:56.636 00:05:56.636 Accel Perf Configuration: 00:05:56.636 Workload Type: copy_crc32c 00:05:56.636 CRC-32C seed: 0 00:05:56.636 Vector size: 4096 bytes 00:05:56.636 Transfer size: 8192 bytes 00:05:56.636 Vector count 2 00:05:56.636 Module: software 00:05:56.636 Queue depth: 32 00:05:56.636 Allocate depth: 32 00:05:56.636 # threads/core: 1 00:05:56.636 Run time: 1 seconds 00:05:56.636 Verify: Yes 00:05:56.636 00:05:56.636 Running for 1 seconds... 
00:05:56.636 00:05:56.636 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:56.636 ------------------------------------------------------------------------------------ 00:05:56.636 0,0 234368/s 1831 MiB/s 0 0 00:05:56.636 ==================================================================================== 00:05:56.636 Total 234368/s 1831 MiB/s 0 0' 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:56.636 13:47:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:56.636 13:47:47 -- accel/accel.sh@12 -- # build_accel_config 00:05:56.636 13:47:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:56.636 13:47:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:56.636 13:47:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:56.636 13:47:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:56.636 13:47:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:56.636 13:47:47 -- accel/accel.sh@41 -- # local IFS=, 00:05:56.636 13:47:47 -- accel/accel.sh@42 -- # jq -r . 00:05:56.636 [2024-07-23 13:47:47.263166] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:56.636 [2024-07-23 13:47:47.263238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092318 ] 00:05:56.636 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.636 [2024-07-23 13:47:47.319326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.636 [2024-07-23 13:47:47.387452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val= 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val= 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val=0x1 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val= 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val= 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val=0 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=:
00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val= 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val=software 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@23 -- # accel_module=software 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val=32 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val=32 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val=1 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val=Yes 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val= 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:56.636 13:47:47 -- accel/accel.sh@21 -- # val= 00:05:56.636 13:47:47 -- accel/accel.sh@22 -- # case "$var" in 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # IFS=: 00:05:56.636 13:47:47 -- accel/accel.sh@20 -- # read -r var val 00:05:57.571 13:47:48 -- accel/accel.sh@21 -- # val= 00:05:57.571 13:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.571 13:47:48 -- accel/accel.sh@20 -- # IFS=: 00:05:57.571 13:47:48 -- accel/accel.sh@20 -- # read -r var val 00:05:57.571 13:47:48 -- accel/accel.sh@21 -- # val= 00:05:57.571 13:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.571 13:47:48 -- accel/accel.sh@20 -- # IFS=: 00:05:57.571 13:47:48 -- accel/accel.sh@20 -- # read -r var val 00:05:57.571 13:47:48 -- accel/accel.sh@21 -- # val= 00:05:57.571 13:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.571 13:47:48 -- accel/accel.sh@20 -- # IFS=: 00:05:57.571 13:47:48 -- accel/accel.sh@20 -- # read -r var val 00:05:57.571 13:47:48 -- accel/accel.sh@21 -- # val= 00:05:57.571 13:47:48 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:57.571 13:47:48 -- accel/accel.sh@20 -- # IFS=: 00:05:57.830 13:47:48 -- accel/accel.sh@20 -- # read -r var val 00:05:57.830 13:47:48 -- accel/accel.sh@21 -- # val= 00:05:57.830 13:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.830 13:47:48 -- accel/accel.sh@20 -- # IFS=: 00:05:57.830 13:47:48 -- accel/accel.sh@20 -- # read -r var val 00:05:57.830 13:47:48 -- accel/accel.sh@21 -- # val= 00:05:57.830 13:47:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.830 13:47:48 -- accel/accel.sh@20 -- # IFS=: 00:05:57.830 13:47:48 -- accel/accel.sh@20 -- # read -r var val 00:05:57.830 13:47:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:57.830 13:47:48 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:57.830 13:47:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:57.830 00:05:57.830 real 0m2.703s 00:05:57.830 user 0m2.486s 00:05:57.830 sys 0m0.224s 00:05:57.830 13:47:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.830 13:47:48 -- common/autotest_common.sh@10 -- # set +x 00:05:57.830 ************************************ 00:05:57.830 END TEST accel_copy_crc32c_C2 00:05:57.830 ************************************ 00:05:57.830 13:47:48 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:57.830 13:47:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:57.830 13:47:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.830 13:47:48 -- common/autotest_common.sh@10 -- # set +x 00:05:57.830 ************************************ 00:05:57.830 START TEST accel_dualcast 00:05:57.830 ************************************ 00:05:57.830 13:47:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:57.830 13:47:48 -- accel/accel.sh@16 -- # local accel_opc 00:05:57.830 13:47:48 -- accel/accel.sh@17 -- # local accel_module 00:05:57.830 13:47:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:57.830 13:47:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:57.830 13:47:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.830 13:47:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.830 13:47:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.830 13:47:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.830 13:47:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.830 13:47:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.830 13:47:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.830 13:47:48 -- accel/accel.sh@42 -- # jq -r . 00:05:57.830 [2024-07-23 13:47:48.655858] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
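
Every run is preceded by the same build_accel_config trace (accel_json_cfg=(), three [[ 0 -gt 0 ]] gates, [[ -n '' ]], local IFS=, and jq -r .): the harness assembles a JSON accel configuration and hands it to accel_perf on /dev/fd/62. A hedged reconstruction of that pattern follows; the gate variables, method names, and JSON envelope are illustrative assumptions, and the list stays empty here because every gate in the trace evaluates false on this machine:

#!/usr/bin/env bash
# Hedged sketch of build_accel_config as traced above. All hardware-module
# gates are false in this log, so the config list stays empty and accel_perf
# falls back to the software module. The variable names, method names, and
# JSON shape below are assumptions, not copied from accel.sh.
build_accel_config_sketch() {
  local accel_json_cfg=()
  [[ ${SPDK_TEST_ACCEL_HW1:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "scan_hw_module_1"}')
  [[ ${SPDK_TEST_ACCEL_HW2:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "scan_hw_module_2"}')
  local IFS=,
  printf '{"subsystems": [{"subsystem": "accel", "config": [%s]}]}' "${accel_json_cfg[*]}" | jq -r .
}
# accel_perf then reads the result on fd 62, e.g.:
#   accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
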
00:05:57.830 [2024-07-23 13:47:48.655915] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092565 ] 00:05:57.830 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.830 [2024-07-23 13:47:48.711715] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.830 [2024-07-23 13:47:48.779594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.209 13:47:49 -- accel/accel.sh@18 -- # out=' 00:05:59.210 SPDK Configuration: 00:05:59.210 Core mask: 0x1 00:05:59.210 00:05:59.210 Accel Perf Configuration: 00:05:59.210 Workload Type: dualcast 00:05:59.210 Transfer size: 4096 bytes 00:05:59.210 Vector count 1 00:05:59.210 Module: software 00:05:59.210 Queue depth: 32 00:05:59.210 Allocate depth: 32 00:05:59.210 # threads/core: 1 00:05:59.210 Run time: 1 seconds 00:05:59.210 Verify: Yes 00:05:59.210 00:05:59.210 Running for 1 seconds... 00:05:59.210 00:05:59.210 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:59.210 ------------------------------------------------------------------------------------ 00:05:59.210 0,0 502976/s 1964 MiB/s 0 0 00:05:59.210 ==================================================================================== 00:05:59.210 Total 502976/s 1964 MiB/s 0 0' 00:05:59.210 13:47:49 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:49 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:59.210 13:47:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:59.210 13:47:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:59.210 13:47:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:59.210 13:47:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:59.210 13:47:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:59.210 13:47:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:59.210 13:47:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:59.210 13:47:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:59.210 13:47:49 -- accel/accel.sh@42 -- # jq -r . 00:05:59.210 [2024-07-23 13:47:50.005385] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
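
One subtlety in the dualcast numbers above: each dualcast operation copies a single 4096-byte source into two destination buffers, but the report counts one transfer per operation, so the actual memory write traffic is roughly twice the printed figure. The dual-destination semantics follow SPDK's accel dualcast opcode; the factor of two below just makes that explicit.

#!/usr/bin/env bash
# Reported vs. written bandwidth for the dualcast run above.
ops_per_sec=502976   # "0,0 502976/s" from the table above
xfer=4096            # "Transfer size: 4096 bytes"
echo "reported: $(( ops_per_sec * xfer / 1024 / 1024 )) MiB/s"       # -> 1964
echo "written:  $(( ops_per_sec * xfer * 2 / 1024 / 1024 )) MiB/s"   # -> 3929
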
00:05:59.210 [2024-07-23 13:47:50.005460] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3092812 ] 00:05:59.210 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.210 [2024-07-23 13:47:50.062288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.210 [2024-07-23 13:47:50.135027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val= 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val= 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val=0x1 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val= 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val= 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val=dualcast 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val= 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val=software 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@23 -- # accel_module=software 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val=32 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val=32 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val=1 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 
-- accel/accel.sh@21 -- # val='1 seconds' 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val=Yes 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val= 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:05:59.210 13:47:50 -- accel/accel.sh@21 -- # val= 00:05:59.210 13:47:50 -- accel/accel.sh@22 -- # case "$var" in 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # IFS=: 00:05:59.210 13:47:50 -- accel/accel.sh@20 -- # read -r var val 00:06:00.590 13:47:51 -- accel/accel.sh@21 -- # val= 00:06:00.590 13:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # IFS=: 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # read -r var val 00:06:00.590 13:47:51 -- accel/accel.sh@21 -- # val= 00:06:00.590 13:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # IFS=: 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # read -r var val 00:06:00.590 13:47:51 -- accel/accel.sh@21 -- # val= 00:06:00.590 13:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # IFS=: 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # read -r var val 00:06:00.590 13:47:51 -- accel/accel.sh@21 -- # val= 00:06:00.590 13:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # IFS=: 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # read -r var val 00:06:00.590 13:47:51 -- accel/accel.sh@21 -- # val= 00:06:00.590 13:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # IFS=: 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # read -r var val 00:06:00.590 13:47:51 -- accel/accel.sh@21 -- # val= 00:06:00.590 13:47:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # IFS=: 00:06:00.590 13:47:51 -- accel/accel.sh@20 -- # read -r var val 00:06:00.590 13:47:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:00.590 13:47:51 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:00.590 13:47:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:00.590 00:06:00.590 real 0m2.708s 00:06:00.590 user 0m2.493s 00:06:00.590 sys 0m0.221s 00:06:00.590 13:47:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.590 13:47:51 -- common/autotest_common.sh@10 -- # set +x 00:06:00.590 ************************************ 00:06:00.590 END TEST accel_dualcast 00:06:00.590 ************************************ 00:06:00.590 13:47:51 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:00.590 13:47:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:00.590 13:47:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.590 13:47:51 -- common/autotest_common.sh@10 -- # set +x 00:06:00.590 ************************************ 00:06:00.590 START TEST accel_compare 00:06:00.590 ************************************ 00:06:00.590 13:47:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:00.590 13:47:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:00.590 13:47:51 
-- accel/accel.sh@17 -- # local accel_module 00:06:00.590 13:47:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:00.590 13:47:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:00.590 13:47:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.591 13:47:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.591 13:47:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.591 13:47:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.591 13:47:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.591 13:47:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.591 13:47:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.591 13:47:51 -- accel/accel.sh@42 -- # jq -r . 00:06:00.591 [2024-07-23 13:47:51.397445] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:00.591 [2024-07-23 13:47:51.397518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093059 ] 00:06:00.591 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.591 [2024-07-23 13:47:51.452933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.591 [2024-07-23 13:47:51.521827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.968 13:47:52 -- accel/accel.sh@18 -- # out=' 00:06:01.968 SPDK Configuration: 00:06:01.968 Core mask: 0x1 00:06:01.968 00:06:01.968 Accel Perf Configuration: 00:06:01.968 Workload Type: compare 00:06:01.968 Transfer size: 4096 bytes 00:06:01.968 Vector count 1 00:06:01.968 Module: software 00:06:01.968 Queue depth: 32 00:06:01.968 Allocate depth: 32 00:06:01.968 # threads/core: 1 00:06:01.968 Run time: 1 seconds 00:06:01.968 Verify: Yes 00:06:01.968 00:06:01.968 Running for 1 seconds... 00:06:01.968 00:06:01.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:01.968 ------------------------------------------------------------------------------------ 00:06:01.968 0,0 608352/s 2376 MiB/s 0 0 00:06:01.968 ==================================================================================== 00:06:01.968 Total 608352/s 2376 MiB/s 0 0' 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:01.968 13:47:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:01.968 13:47:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.968 13:47:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.968 13:47:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.968 13:47:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.968 13:47:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.968 13:47:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.968 13:47:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.968 13:47:52 -- accel/accel.sh@42 -- # jq -r . 00:06:01.968 [2024-07-23 13:47:52.745312] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
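
compare only reads and diffs its two input buffers, which is why it posts 608352 ops/s here, well ahead of compute-and-write workloads like copy_crc32c. The Total rows scattered through these reports can also be collected mechanically; a sketch that reruns the plain flag-set workloads from this section and greps each summary (fill is left out because it additionally wants the -f/-q/-a flags used in its own run above):

#!/usr/bin/env bash
# Rerun the "-t 1 -w <workload> -y" tests from this section and pull the
# Total row out of each accel_perf report.
BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
for w in copy copy_crc32c dualcast compare xor; do
  "$BIN" -t 1 -w "$w" -y | awk -v w="$w" '$1 == "Total" { print w ": " $0 }'
done
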
00:06:01.968 [2024-07-23 13:47:52.745386] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093294 ] 00:06:01.968 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.968 [2024-07-23 13:47:52.799056] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.968 [2024-07-23 13:47:52.866338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val= 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val= 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val=0x1 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val= 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val= 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val=compare 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val= 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val=software 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@23 -- # accel_module=software 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val=32 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val=32 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val=1 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val=Yes 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val= 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:01.968 13:47:52 -- accel/accel.sh@21 -- # val= 00:06:01.968 13:47:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # IFS=: 00:06:01.968 13:47:52 -- accel/accel.sh@20 -- # read -r var val 00:06:03.351 13:47:54 -- accel/accel.sh@21 -- # val= 00:06:03.351 13:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # IFS=: 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # read -r var val 00:06:03.351 13:47:54 -- accel/accel.sh@21 -- # val= 00:06:03.351 13:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # IFS=: 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # read -r var val 00:06:03.351 13:47:54 -- accel/accel.sh@21 -- # val= 00:06:03.351 13:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # IFS=: 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # read -r var val 00:06:03.351 13:47:54 -- accel/accel.sh@21 -- # val= 00:06:03.351 13:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # IFS=: 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # read -r var val 00:06:03.351 13:47:54 -- accel/accel.sh@21 -- # val= 00:06:03.351 13:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # IFS=: 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # read -r var val 00:06:03.351 13:47:54 -- accel/accel.sh@21 -- # val= 00:06:03.351 13:47:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # IFS=: 00:06:03.351 13:47:54 -- accel/accel.sh@20 -- # read -r var val 00:06:03.351 13:47:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:03.351 13:47:54 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:03.351 13:47:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:03.351 00:06:03.351 real 0m2.700s 00:06:03.351 user 0m2.486s 00:06:03.351 sys 0m0.219s 00:06:03.351 13:47:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.351 13:47:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.351 ************************************ 00:06:03.351 END TEST accel_compare 00:06:03.351 ************************************ 00:06:03.351 13:47:54 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:03.351 13:47:54 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:03.351 13:47:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:03.351 13:47:54 -- common/autotest_common.sh@10 -- # set +x 00:06:03.351 ************************************ 00:06:03.351 START TEST accel_xor 00:06:03.351 ************************************ 00:06:03.351 13:47:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:03.351 13:47:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:03.351 13:47:54 -- accel/accel.sh@17 
-- # local accel_module 00:06:03.351 13:47:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:03.351 13:47:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:03.351 13:47:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.351 13:47:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.351 13:47:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.351 13:47:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.351 13:47:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.351 13:47:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.351 13:47:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.351 13:47:54 -- accel/accel.sh@42 -- # jq -r . 00:06:03.351 [2024-07-23 13:47:54.120455] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:03.351 [2024-07-23 13:47:54.120501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093547 ] 00:06:03.351 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.351 [2024-07-23 13:47:54.172338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.351 [2024-07-23 13:47:54.241362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.730 13:47:55 -- accel/accel.sh@18 -- # out=' 00:06:04.730 SPDK Configuration: 00:06:04.730 Core mask: 0x1 00:06:04.730 00:06:04.730 Accel Perf Configuration: 00:06:04.730 Workload Type: xor 00:06:04.730 Source buffers: 2 00:06:04.730 Transfer size: 4096 bytes 00:06:04.730 Vector count 1 00:06:04.730 Module: software 00:06:04.730 Queue depth: 32 00:06:04.730 Allocate depth: 32 00:06:04.730 # threads/core: 1 00:06:04.730 Run time: 1 seconds 00:06:04.730 Verify: Yes 00:06:04.730 00:06:04.730 Running for 1 seconds... 00:06:04.730 00:06:04.730 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:04.730 ------------------------------------------------------------------------------------ 00:06:04.731 0,0 481344/s 1880 MiB/s 0 0 00:06:04.731 ==================================================================================== 00:06:04.731 Total 481344/s 1880 MiB/s 0 0' 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:04.731 13:47:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:04.731 13:47:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.731 13:47:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.731 13:47:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.731 13:47:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.731 13:47:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.731 13:47:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.731 13:47:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.731 13:47:55 -- accel/accel.sh@42 -- # jq -r . 00:06:04.731 [2024-07-23 13:47:55.468293] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
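
The xor run above uses the default two source buffers ("Source buffers: 2" with no -x flag); the follow-up test below passes -x 3 and lands about five percent lower, since every operation reads one extra 4096-byte source. Side by side:

#!/usr/bin/env bash
# The two xor tests in this section differ only in source-buffer count (-x);
# both command lines are taken from the accel.sh traces.
BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
"$BIN" -t 1 -w xor -y        # 2 source buffers (default): 481344 ops/s above
"$BIN" -t 1 -w xor -y -x 3   # 3 source buffers:           457184 ops/s below
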
00:06:04.731 [2024-07-23 13:47:55.468362] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3093784 ] 00:06:04.731 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.731 [2024-07-23 13:47:55.523886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.731 [2024-07-23 13:47:55.591006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val= 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val= 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val=0x1 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val= 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val= 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val=xor 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val=2 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val= 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val=software 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val=32 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val=32 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- 
accel/accel.sh@21 -- # val=1 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val=Yes 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val= 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:04.731 13:47:55 -- accel/accel.sh@21 -- # val= 00:06:04.731 13:47:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # IFS=: 00:06:04.731 13:47:55 -- accel/accel.sh@20 -- # read -r var val 00:06:06.111 13:47:56 -- accel/accel.sh@21 -- # val= 00:06:06.111 13:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # IFS=: 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # read -r var val 00:06:06.111 13:47:56 -- accel/accel.sh@21 -- # val= 00:06:06.111 13:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # IFS=: 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # read -r var val 00:06:06.111 13:47:56 -- accel/accel.sh@21 -- # val= 00:06:06.111 13:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # IFS=: 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # read -r var val 00:06:06.111 13:47:56 -- accel/accel.sh@21 -- # val= 00:06:06.111 13:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # IFS=: 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # read -r var val 00:06:06.111 13:47:56 -- accel/accel.sh@21 -- # val= 00:06:06.111 13:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # IFS=: 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # read -r var val 00:06:06.111 13:47:56 -- accel/accel.sh@21 -- # val= 00:06:06.111 13:47:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # IFS=: 00:06:06.111 13:47:56 -- accel/accel.sh@20 -- # read -r var val 00:06:06.111 13:47:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:06.111 13:47:56 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:06.111 13:47:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:06.111 00:06:06.111 real 0m2.690s 00:06:06.111 user 0m2.477s 00:06:06.111 sys 0m0.221s 00:06:06.111 13:47:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:06.111 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.111 ************************************ 00:06:06.111 END TEST accel_xor 00:06:06.111 ************************************ 00:06:06.111 13:47:56 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:06.111 13:47:56 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:06.111 13:47:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:06.111 13:47:56 -- common/autotest_common.sh@10 -- # set +x 00:06:06.111 ************************************ 00:06:06.111 START TEST accel_xor 
00:06:06.111 ************************************ 00:06:06.111 13:47:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:06.111 13:47:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.111 13:47:56 -- accel/accel.sh@17 -- # local accel_module 00:06:06.111 13:47:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:06.111 13:47:56 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:06.111 13:47:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.111 13:47:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.111 13:47:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.111 13:47:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.111 13:47:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.111 13:47:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.111 13:47:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.111 13:47:56 -- accel/accel.sh@42 -- # jq -r . 00:06:06.111 [2024-07-23 13:47:56.851755] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:06.111 [2024-07-23 13:47:56.851811] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094031 ] 00:06:06.111 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.111 [2024-07-23 13:47:56.905333] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.112 [2024-07-23 13:47:56.973786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.491 13:47:58 -- accel/accel.sh@18 -- # out=' 00:06:07.491 SPDK Configuration: 00:06:07.491 Core mask: 0x1 00:06:07.491 00:06:07.491 Accel Perf Configuration: 00:06:07.491 Workload Type: xor 00:06:07.491 Source buffers: 3 00:06:07.491 Transfer size: 4096 bytes 00:06:07.491 Vector count 1 00:06:07.491 Module: software 00:06:07.491 Queue depth: 32 00:06:07.491 Allocate depth: 32 00:06:07.491 # threads/core: 1 00:06:07.491 Run time: 1 seconds 00:06:07.491 Verify: Yes 00:06:07.491 00:06:07.491 Running for 1 seconds... 00:06:07.491 00:06:07.491 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:07.491 ------------------------------------------------------------------------------------ 00:06:07.491 0,0 457184/s 1785 MiB/s 0 0 00:06:07.491 ==================================================================================== 00:06:07.491 Total 457184/s 1785 MiB/s 0 0' 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.491 13:47:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:07.491 13:47:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:07.491 13:47:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:07.491 13:47:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.491 13:47:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.491 13:47:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.491 13:47:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.491 13:47:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.491 13:47:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.491 13:47:58 -- accel/accel.sh@42 -- # jq -r . 00:06:07.491 [2024-07-23 13:47:58.197122] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
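The MiB/s column follows directly from the transfer rate: every operation moves 4096 bytes, so 457184 xors/s is 457184 × 4096 B/s ≈ 1785 MiB/s, as reported for the 3-buffer run above. A quick shell check of that arithmetic (integer division truncates the fraction):

  # 457184 transfers/s at 4096 bytes each, expressed in MiB/s
  echo $(( 457184 * 4096 / 1024 / 1024 ))   # prints 1785
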
00:06:07.491 [2024-07-23 13:47:58.197180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094269 ] 00:06:07.491 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.491 [2024-07-23 13:47:58.249669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.491 [2024-07-23 13:47:58.316978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.491 13:47:58 -- accel/accel.sh@21 -- # val= 00:06:07.491 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.491 13:47:58 -- accel/accel.sh@21 -- # val= 00:06:07.491 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.491 13:47:58 -- accel/accel.sh@21 -- # val=0x1 00:06:07.491 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.491 13:47:58 -- accel/accel.sh@21 -- # val= 00:06:07.491 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.491 13:47:58 -- accel/accel.sh@21 -- # val= 00:06:07.491 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.491 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.491 13:47:58 -- accel/accel.sh@21 -- # val=xor 00:06:07.491 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val=3 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val= 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val=software 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val=32 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val=32 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- 
accel/accel.sh@21 -- # val=1 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val=Yes 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val= 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:07.492 13:47:58 -- accel/accel.sh@21 -- # val= 00:06:07.492 13:47:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # IFS=: 00:06:07.492 13:47:58 -- accel/accel.sh@20 -- # read -r var val 00:06:08.871 13:47:59 -- accel/accel.sh@21 -- # val= 00:06:08.871 13:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.871 13:47:59 -- accel/accel.sh@21 -- # val= 00:06:08.871 13:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.871 13:47:59 -- accel/accel.sh@21 -- # val= 00:06:08.871 13:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.871 13:47:59 -- accel/accel.sh@21 -- # val= 00:06:08.871 13:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.871 13:47:59 -- accel/accel.sh@21 -- # val= 00:06:08.871 13:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.871 13:47:59 -- accel/accel.sh@21 -- # val= 00:06:08.871 13:47:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # IFS=: 00:06:08.871 13:47:59 -- accel/accel.sh@20 -- # read -r var val 00:06:08.871 13:47:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:08.871 13:47:59 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:08.871 13:47:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.871 00:06:08.871 real 0m2.694s 00:06:08.871 user 0m2.482s 00:06:08.871 sys 0m0.220s 00:06:08.871 13:47:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.871 13:47:59 -- common/autotest_common.sh@10 -- # set +x 00:06:08.871 ************************************ 00:06:08.871 END TEST accel_xor 00:06:08.871 ************************************ 00:06:08.871 13:47:59 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:08.871 13:47:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:08.871 13:47:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.871 13:47:59 -- common/autotest_common.sh@10 -- # set +x 00:06:08.871 ************************************ 00:06:08.871 START TEST 
accel_dif_verify 00:06:08.871 ************************************ 00:06:08.871 13:47:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:08.871 13:47:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:08.871 13:47:59 -- accel/accel.sh@17 -- # local accel_module 00:06:08.871 13:47:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:08.871 13:47:59 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:08.871 13:47:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.871 13:47:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.871 13:47:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.871 13:47:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.871 13:47:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.871 13:47:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.871 13:47:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.871 13:47:59 -- accel/accel.sh@42 -- # jq -r . 00:06:08.871 [2024-07-23 13:47:59.580193] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:08.871 [2024-07-23 13:47:59.580267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094518 ] 00:06:08.871 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.871 [2024-07-23 13:47:59.634353] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.871 [2024-07-23 13:47:59.703319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.252 13:48:00 -- accel/accel.sh@18 -- # out=' 00:06:10.252 SPDK Configuration: 00:06:10.252 Core mask: 0x1 00:06:10.252 00:06:10.252 Accel Perf Configuration: 00:06:10.252 Workload Type: dif_verify 00:06:10.252 Vector size: 4096 bytes 00:06:10.252 Transfer size: 4096 bytes 00:06:10.252 Block size: 512 bytes 00:06:10.252 Metadata size: 8 bytes 00:06:10.252 Vector count 1 00:06:10.252 Module: software 00:06:10.252 Queue depth: 32 00:06:10.252 Allocate depth: 32 00:06:10.252 # threads/core: 1 00:06:10.252 Run time: 1 seconds 00:06:10.252 Verify: No 00:06:10.252 00:06:10.252 Running for 1 seconds... 00:06:10.252 00:06:10.252 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.252 ------------------------------------------------------------------------------------ 00:06:10.252 0,0 122560/s 486 MiB/s 0 0 00:06:10.252 ==================================================================================== 00:06:10.252 Total 122560/s 478 MiB/s 0 0' 00:06:10.252 13:48:00 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:00 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:10.252 13:48:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:10.252 13:48:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.252 13:48:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.252 13:48:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.252 13:48:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.252 13:48:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.252 13:48:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.252 13:48:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.252 13:48:00 -- accel/accel.sh@42 -- # jq -r . 
00:06:10.252 [2024-07-23 13:48:00.928178] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:10.252 [2024-07-23 13:48:00.928242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094804 ] 00:06:10.252 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.252 [2024-07-23 13:48:00.982468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.252 [2024-07-23 13:48:01.050214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val= 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val= 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val=0x1 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val= 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val= 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val=dif_verify 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val= 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val=software 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@23 -- # 
accel_module=software 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val=32 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val=32 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val=1 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.252 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.252 13:48:01 -- accel/accel.sh@21 -- # val=No 00:06:10.252 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.253 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.253 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.253 13:48:01 -- accel/accel.sh@21 -- # val= 00:06:10.253 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.253 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.253 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:10.253 13:48:01 -- accel/accel.sh@21 -- # val= 00:06:10.253 13:48:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:10.253 13:48:01 -- accel/accel.sh@20 -- # IFS=: 00:06:10.253 13:48:01 -- accel/accel.sh@20 -- # read -r var val 00:06:11.634 13:48:02 -- accel/accel.sh@21 -- # val= 00:06:11.634 13:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # IFS=: 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # read -r var val 00:06:11.634 13:48:02 -- accel/accel.sh@21 -- # val= 00:06:11.634 13:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # IFS=: 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # read -r var val 00:06:11.634 13:48:02 -- accel/accel.sh@21 -- # val= 00:06:11.634 13:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # IFS=: 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # read -r var val 00:06:11.634 13:48:02 -- accel/accel.sh@21 -- # val= 00:06:11.634 13:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # IFS=: 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # read -r var val 00:06:11.634 13:48:02 -- accel/accel.sh@21 -- # val= 00:06:11.634 13:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # IFS=: 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # read -r var val 00:06:11.634 13:48:02 -- accel/accel.sh@21 -- # val= 00:06:11.634 13:48:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # IFS=: 00:06:11.634 13:48:02 -- accel/accel.sh@20 -- # read -r var val 00:06:11.634 13:48:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:11.634 13:48:02 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:11.634 13:48:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.634 00:06:11.634 real 0m2.701s 00:06:11.634 user 0m2.484s 00:06:11.634 sys 0m0.224s 00:06:11.634 13:48:02 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.634 13:48:02 -- common/autotest_common.sh@10 -- # set +x 00:06:11.634 ************************************ 00:06:11.634 END TEST accel_dif_verify 00:06:11.634 ************************************ 00:06:11.634 13:48:02 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:11.634 13:48:02 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:11.634 13:48:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.634 13:48:02 -- common/autotest_common.sh@10 -- # set +x 00:06:11.634 ************************************ 00:06:11.634 START TEST accel_dif_generate 00:06:11.634 ************************************ 00:06:11.634 13:48:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:11.634 13:48:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.634 13:48:02 -- accel/accel.sh@17 -- # local accel_module 00:06:11.634 13:48:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:11.634 13:48:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:11.634 13:48:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.634 13:48:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.634 13:48:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.634 13:48:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.634 13:48:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.634 13:48:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.634 13:48:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.634 13:48:02 -- accel/accel.sh@42 -- # jq -r . 00:06:11.634 [2024-07-23 13:48:02.297682] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:11.634 [2024-07-23 13:48:02.297727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095133 ] 00:06:11.634 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.634 [2024-07-23 13:48:02.349711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.634 [2024-07-23 13:48:02.418513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.013 13:48:03 -- accel/accel.sh@18 -- # out=' 00:06:13.013 SPDK Configuration: 00:06:13.013 Core mask: 0x1 00:06:13.013 00:06:13.013 Accel Perf Configuration: 00:06:13.013 Workload Type: dif_generate 00:06:13.013 Vector size: 4096 bytes 00:06:13.013 Transfer size: 4096 bytes 00:06:13.013 Block size: 512 bytes 00:06:13.013 Metadata size: 8 bytes 00:06:13.013 Vector count 1 00:06:13.013 Module: software 00:06:13.013 Queue depth: 32 00:06:13.013 Allocate depth: 32 00:06:13.013 # threads/core: 1 00:06:13.013 Run time: 1 seconds 00:06:13.013 Verify: No 00:06:13.013 00:06:13.013 Running for 1 seconds... 
00:06:13.013 00:06:13.013 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.013 ------------------------------------------------------------------------------------ 00:06:13.013 0,0 158688/s 629 MiB/s 0 0 00:06:13.013 ==================================================================================== 00:06:13.013 Total 158688/s 619 MiB/s 0 0' 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:13.013 13:48:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:13.013 13:48:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.013 13:48:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.013 13:48:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.013 13:48:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.013 13:48:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.013 13:48:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.013 13:48:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.013 13:48:03 -- accel/accel.sh@42 -- # jq -r . 00:06:13.013 [2024-07-23 13:48:03.642004] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:13.013 [2024-07-23 13:48:03.642081] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095369 ] 00:06:13.013 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.013 [2024-07-23 13:48:03.697873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.013 [2024-07-23 13:48:03.764670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val= 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val= 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val=0x1 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val= 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val= 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val=dif_generate 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 
00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val= 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val=software 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val=32 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val=32 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val=1 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val=No 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val= 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.013 13:48:03 -- accel/accel.sh@21 -- # val= 00:06:13.013 13:48:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # IFS=: 00:06:13.013 13:48:03 -- accel/accel.sh@20 -- # read -r var val 00:06:13.953 13:48:04 -- accel/accel.sh@21 -- # val= 00:06:13.953 13:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # IFS=: 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # read -r var val 00:06:13.953 13:48:04 -- accel/accel.sh@21 -- # val= 00:06:13.953 13:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # IFS=: 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # read -r var val 00:06:13.953 13:48:04 -- accel/accel.sh@21 -- # val= 00:06:13.953 13:48:04 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # IFS=: 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # read -r var val 00:06:13.953 13:48:04 -- accel/accel.sh@21 -- # val= 00:06:13.953 13:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # IFS=: 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # read -r var val 00:06:13.953 13:48:04 -- accel/accel.sh@21 -- # val= 00:06:13.953 13:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # IFS=: 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # read -r var val 00:06:13.953 13:48:04 -- accel/accel.sh@21 -- # val= 00:06:13.953 13:48:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # IFS=: 00:06:13.953 13:48:04 -- accel/accel.sh@20 -- # read -r var val 00:06:13.953 13:48:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:13.953 13:48:04 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:13.953 13:48:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:13.953 00:06:13.953 real 0m2.682s 00:06:13.953 user 0m2.470s 00:06:13.953 sys 0m0.219s 00:06:13.953 13:48:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.953 13:48:04 -- common/autotest_common.sh@10 -- # set +x 00:06:13.953 ************************************ 00:06:13.953 END TEST accel_dif_generate 00:06:13.953 ************************************ 00:06:14.223 13:48:04 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:14.223 13:48:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:14.223 13:48:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.223 13:48:04 -- common/autotest_common.sh@10 -- # set +x 00:06:14.223 ************************************ 00:06:14.223 START TEST accel_dif_generate_copy 00:06:14.223 ************************************ 00:06:14.223 13:48:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:14.223 13:48:05 -- accel/accel.sh@16 -- # local accel_opc 00:06:14.223 13:48:05 -- accel/accel.sh@17 -- # local accel_module 00:06:14.223 13:48:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:14.223 13:48:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:14.223 13:48:05 -- accel/accel.sh@12 -- # build_accel_config 00:06:14.223 13:48:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:14.223 13:48:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.223 13:48:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.223 13:48:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:14.223 13:48:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:14.223 13:48:05 -- accel/accel.sh@41 -- # local IFS=, 00:06:14.223 13:48:05 -- accel/accel.sh@42 -- # jq -r . 00:06:14.223 [2024-07-23 13:48:05.025750] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
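For comparison with dif_verify: the dif_generate pass that just completed only produces the per-block protection fields, with no checking against existing metadata, which is consistent with its higher rate (158688/s vs 122560/s for dif_verify). A sketch of the equivalent standalone run, same assumptions as the earlier sketches:

  # Sketch: rerun the dif_generate workload for 1 second
  ./spdk/build/examples/accel_perf -t 1 -w dif_generate
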
00:06:14.223 [2024-07-23 13:48:05.025815] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095618 ] 00:06:14.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.223 [2024-07-23 13:48:05.079242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.223 [2024-07-23 13:48:05.147813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.646 13:48:06 -- accel/accel.sh@18 -- # out=' 00:06:15.646 SPDK Configuration: 00:06:15.646 Core mask: 0x1 00:06:15.646 00:06:15.646 Accel Perf Configuration: 00:06:15.646 Workload Type: dif_generate_copy 00:06:15.646 Vector size: 4096 bytes 00:06:15.646 Transfer size: 4096 bytes 00:06:15.646 Vector count 1 00:06:15.646 Module: software 00:06:15.646 Queue depth: 32 00:06:15.646 Allocate depth: 32 00:06:15.646 # threads/core: 1 00:06:15.646 Run time: 1 seconds 00:06:15.646 Verify: No 00:06:15.646 00:06:15.646 Running for 1 seconds... 00:06:15.646 00:06:15.646 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.646 ------------------------------------------------------------------------------------ 00:06:15.646 0,0 122656/s 486 MiB/s 0 0 00:06:15.646 ==================================================================================== 00:06:15.646 Total 122656/s 479 MiB/s 0 0' 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:15.646 13:48:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:15.646 13:48:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.646 13:48:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.646 13:48:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.646 13:48:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.646 13:48:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.646 13:48:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.646 13:48:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.646 13:48:06 -- accel/accel.sh@42 -- # jq -r . 00:06:15.646 [2024-07-23 13:48:06.374993] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
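dif_generate_copy combines generation with a copy into a separate destination buffer rather than writing the protection fields in place, and its first pass above lands at 122656/s, close to dif_verify's rate. A sketch of the equivalent standalone run, same assumptions as before:

  # Sketch: rerun the dif_generate_copy workload for 1 second
  ./spdk/build/examples/accel_perf -t 1 -w dif_generate_copy
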
00:06:15.646 [2024-07-23 13:48:06.375080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3095861 ] 00:06:15.646 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.646 [2024-07-23 13:48:06.429787] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.646 [2024-07-23 13:48:06.496816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val= 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val= 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val=0x1 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val= 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val= 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val= 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val=software 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@23 -- # accel_module=software 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val=32 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val=32 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r 
var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val=1 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val=No 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val= 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:15.646 13:48:06 -- accel/accel.sh@21 -- # val= 00:06:15.646 13:48:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # IFS=: 00:06:15.646 13:48:06 -- accel/accel.sh@20 -- # read -r var val 00:06:17.025 13:48:07 -- accel/accel.sh@21 -- # val= 00:06:17.025 13:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # IFS=: 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # read -r var val 00:06:17.025 13:48:07 -- accel/accel.sh@21 -- # val= 00:06:17.025 13:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # IFS=: 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # read -r var val 00:06:17.025 13:48:07 -- accel/accel.sh@21 -- # val= 00:06:17.025 13:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # IFS=: 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # read -r var val 00:06:17.025 13:48:07 -- accel/accel.sh@21 -- # val= 00:06:17.025 13:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # IFS=: 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # read -r var val 00:06:17.025 13:48:07 -- accel/accel.sh@21 -- # val= 00:06:17.025 13:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # IFS=: 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # read -r var val 00:06:17.025 13:48:07 -- accel/accel.sh@21 -- # val= 00:06:17.025 13:48:07 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # IFS=: 00:06:17.025 13:48:07 -- accel/accel.sh@20 -- # read -r var val 00:06:17.025 13:48:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.025 13:48:07 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:17.025 13:48:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.025 00:06:17.025 real 0m2.699s 00:06:17.025 user 0m2.482s 00:06:17.025 sys 0m0.221s 00:06:17.025 13:48:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.025 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:06:17.025 ************************************ 00:06:17.025 END TEST accel_dif_generate_copy 00:06:17.025 ************************************ 00:06:17.025 13:48:07 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:17.025 13:48:07 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.025 13:48:07 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:17.025 13:48:07 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.025 13:48:07 -- common/autotest_common.sh@10 -- # set +x 00:06:17.025 ************************************ 00:06:17.025 START TEST accel_comp 00:06:17.025 ************************************ 00:06:17.025 13:48:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.025 13:48:07 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.025 13:48:07 -- accel/accel.sh@17 -- # local accel_module 00:06:17.025 13:48:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.026 13:48:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:17.026 13:48:07 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.026 13:48:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.026 13:48:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.026 13:48:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.026 13:48:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.026 13:48:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.026 13:48:07 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.026 13:48:07 -- accel/accel.sh@42 -- # jq -r . 00:06:17.026 [2024-07-23 13:48:07.752640] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:17.026 [2024-07-23 13:48:07.752698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096111 ] 00:06:17.026 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.026 [2024-07-23 13:48:07.805698] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.026 [2024-07-23 13:48:07.874216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.405 13:48:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:18.405 00:06:18.405 SPDK Configuration: 00:06:18.405 Core mask: 0x1 00:06:18.405 00:06:18.405 Accel Perf Configuration: 00:06:18.405 Workload Type: compress 00:06:18.405 Transfer size: 4096 bytes 00:06:18.405 Vector count 1 00:06:18.405 Module: software 00:06:18.405 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.405 Queue depth: 32 00:06:18.405 Allocate depth: 32 00:06:18.405 # threads/core: 1 00:06:18.405 Run time: 1 seconds 00:06:18.405 Verify: No 00:06:18.405 00:06:18.405 Running for 1 seconds... 
00:06:18.405 00:06:18.405 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:18.405 ------------------------------------------------------------------------------------ 00:06:18.405 0,0 61248/s 255 MiB/s 0 0 00:06:18.405 ==================================================================================== 00:06:18.405 Total 61248/s 239 MiB/s 0 0' 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.405 13:48:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.405 13:48:09 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.405 13:48:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.405 13:48:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.405 13:48:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.405 13:48:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.405 13:48:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.405 13:48:09 -- accel/accel.sh@41 -- # local IFS=, 00:06:18.405 13:48:09 -- accel/accel.sh@42 -- # jq -r . 00:06:18.405 [2024-07-23 13:48:09.101207] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:18.405 [2024-07-23 13:48:09.101267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096585 ] 00:06:18.405 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.405 [2024-07-23 13:48:09.154023] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.405 [2024-07-23 13:48:09.221106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val= 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val= 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val= 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val=0x1 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val= 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val= 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val=compress 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 
13:48:09 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val= 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val=software 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.405 13:48:09 -- accel/accel.sh@23 -- # accel_module=software 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.405 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.405 13:48:09 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:18.405 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.406 13:48:09 -- accel/accel.sh@21 -- # val=32 00:06:18.406 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.406 13:48:09 -- accel/accel.sh@21 -- # val=32 00:06:18.406 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.406 13:48:09 -- accel/accel.sh@21 -- # val=1 00:06:18.406 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.406 13:48:09 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:18.406 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.406 13:48:09 -- accel/accel.sh@21 -- # val=No 00:06:18.406 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.406 13:48:09 -- accel/accel.sh@21 -- # val= 00:06:18.406 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:18.406 13:48:09 -- accel/accel.sh@21 -- # val= 00:06:18.406 13:48:09 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # IFS=: 00:06:18.406 13:48:09 -- accel/accel.sh@20 -- # read -r var val 00:06:19.785 13:48:10 -- accel/accel.sh@21 -- # val= 00:06:19.785 13:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # IFS=: 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # read -r var val 00:06:19.785 13:48:10 -- accel/accel.sh@21 -- # val= 00:06:19.785 13:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # IFS=: 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # read -r var val 00:06:19.785 13:48:10 -- accel/accel.sh@21 -- # val= 00:06:19.785 13:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # 
IFS=: 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # read -r var val 00:06:19.785 13:48:10 -- accel/accel.sh@21 -- # val= 00:06:19.785 13:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # IFS=: 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # read -r var val 00:06:19.785 13:48:10 -- accel/accel.sh@21 -- # val= 00:06:19.785 13:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # IFS=: 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # read -r var val 00:06:19.785 13:48:10 -- accel/accel.sh@21 -- # val= 00:06:19.785 13:48:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # IFS=: 00:06:19.785 13:48:10 -- accel/accel.sh@20 -- # read -r var val 00:06:19.785 13:48:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:19.785 13:48:10 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:19.785 13:48:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.785 00:06:19.785 real 0m2.700s 00:06:19.785 user 0m2.484s 00:06:19.785 sys 0m0.222s 00:06:19.785 13:48:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.785 13:48:10 -- common/autotest_common.sh@10 -- # set +x 00:06:19.785 ************************************ 00:06:19.785 END TEST accel_comp 00:06:19.785 ************************************ 00:06:19.785 13:48:10 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.785 13:48:10 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:19.785 13:48:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:19.785 13:48:10 -- common/autotest_common.sh@10 -- # set +x 00:06:19.785 ************************************ 00:06:19.785 START TEST accel_decomp 00:06:19.785 ************************************ 00:06:19.785 13:48:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.785 13:48:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:19.785 13:48:10 -- accel/accel.sh@17 -- # local accel_module 00:06:19.785 13:48:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.785 13:48:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:19.785 13:48:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.785 13:48:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.785 13:48:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.785 13:48:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.785 13:48:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.785 13:48:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.785 13:48:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.785 13:48:10 -- accel/accel.sh@42 -- # jq -r . 00:06:19.785 [2024-07-23 13:48:10.485827] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:19.785 [2024-07-23 13:48:10.485901] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3096989 ] 00:06:19.785 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.785 [2024-07-23 13:48:10.539812] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.785 [2024-07-23 13:48:10.608205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.164 13:48:11 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:21.164 00:06:21.164 SPDK Configuration: 00:06:21.164 Core mask: 0x1 00:06:21.164 00:06:21.164 Accel Perf Configuration: 00:06:21.164 Workload Type: decompress 00:06:21.164 Transfer size: 4096 bytes 00:06:21.164 Vector count 1 00:06:21.164 Module: software 00:06:21.164 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.164 Queue depth: 32 00:06:21.164 Allocate depth: 32 00:06:21.164 # threads/core: 1 00:06:21.164 Run time: 1 seconds 00:06:21.164 Verify: Yes 00:06:21.164 00:06:21.164 Running for 1 seconds... 00:06:21.164 00:06:21.164 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:21.164 ------------------------------------------------------------------------------------ 00:06:21.164 0,0 73280/s 286 MiB/s 0 0 00:06:21.164 ==================================================================================== 00:06:21.164 Total 73280/s 286 MiB/s 0 0' 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:21.164 13:48:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:21.164 13:48:11 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.164 13:48:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.164 13:48:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.164 13:48:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.164 13:48:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.164 13:48:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.164 13:48:11 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.164 13:48:11 -- accel/accel.sh@42 -- # jq -r . 00:06:21.164 [2024-07-23 13:48:11.832999] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
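The Bandwidth column in these result tables is derivable from the Transfers column: transfers per second times the transfer size, converted to MiB. A quick shell check (illustrative only, not part of the logged run; 73280/s and 4096 bytes are taken from the decompress table above):
# Sanity check: 73280 transfers/s x 4096 B per transfer, reported in MiB/s
echo "$(( 73280 * 4096 / 1048576 )) MiB/s"   # prints: 286 MiB/s, matching the Total row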
00:06:21.164 [2024-07-23 13:48:11.833068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097223 ] 00:06:21.164 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.164 [2024-07-23 13:48:11.885813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.164 [2024-07-23 13:48:11.953630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.164 13:48:11 -- accel/accel.sh@21 -- # val= 00:06:21.164 13:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:11 -- accel/accel.sh@21 -- # val= 00:06:21.164 13:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:11 -- accel/accel.sh@21 -- # val= 00:06:21.164 13:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:11 -- accel/accel.sh@21 -- # val=0x1 00:06:21.164 13:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:11 -- accel/accel.sh@21 -- # val= 00:06:21.164 13:48:11 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:11 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:12 -- accel/accel.sh@21 -- # val= 00:06:21.164 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:12 -- accel/accel.sh@21 -- # val=decompress 00:06:21.164 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:12 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:21.164 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:12 -- accel/accel.sh@21 -- # val= 00:06:21.164 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:12 -- accel/accel.sh@21 -- # val=software 00:06:21.164 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:12 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:21.164 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.164 13:48:12 -- accel/accel.sh@21 -- # val=32 00:06:21.164 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.164 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.164 13:48:12 
-- accel/accel.sh@20 -- # read -r var val 00:06:21.165 13:48:12 -- accel/accel.sh@21 -- # val=32 00:06:21.165 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.165 13:48:12 -- accel/accel.sh@21 -- # val=1 00:06:21.165 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.165 13:48:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:21.165 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.165 13:48:12 -- accel/accel.sh@21 -- # val=Yes 00:06:21.165 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.165 13:48:12 -- accel/accel.sh@21 -- # val= 00:06:21.165 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:21.165 13:48:12 -- accel/accel.sh@21 -- # val= 00:06:21.165 13:48:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # IFS=: 00:06:21.165 13:48:12 -- accel/accel.sh@20 -- # read -r var val 00:06:22.543 13:48:13 -- accel/accel.sh@21 -- # val= 00:06:22.543 13:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # IFS=: 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # read -r var val 00:06:22.543 13:48:13 -- accel/accel.sh@21 -- # val= 00:06:22.543 13:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # IFS=: 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # read -r var val 00:06:22.543 13:48:13 -- accel/accel.sh@21 -- # val= 00:06:22.543 13:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # IFS=: 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # read -r var val 00:06:22.543 13:48:13 -- accel/accel.sh@21 -- # val= 00:06:22.543 13:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # IFS=: 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # read -r var val 00:06:22.543 13:48:13 -- accel/accel.sh@21 -- # val= 00:06:22.543 13:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # IFS=: 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # read -r var val 00:06:22.543 13:48:13 -- accel/accel.sh@21 -- # val= 00:06:22.543 13:48:13 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # IFS=: 00:06:22.543 13:48:13 -- accel/accel.sh@20 -- # read -r var val 00:06:22.543 13:48:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:22.543 13:48:13 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:22.543 13:48:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.543 00:06:22.543 real 0m2.700s 00:06:22.543 user 0m2.487s 00:06:22.543 sys 0m0.222s 00:06:22.543 13:48:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.543 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:06:22.543 ************************************ 00:06:22.543 END TEST accel_decomp 00:06:22.543 ************************************ 00:06:22.543 13:48:13 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:22.543 13:48:13 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:22.543 13:48:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:22.543 13:48:13 -- common/autotest_common.sh@10 -- # set +x 00:06:22.543 ************************************ 00:06:22.543 START TEST accel_decmop_full 00:06:22.543 ************************************ 00:06:22.543 13:48:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:22.543 13:48:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:22.543 13:48:13 -- accel/accel.sh@17 -- # local accel_module 00:06:22.543 13:48:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:22.543 13:48:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:22.543 13:48:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.543 13:48:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.543 13:48:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.543 13:48:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.543 13:48:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.543 13:48:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.543 13:48:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.543 13:48:13 -- accel/accel.sh@42 -- # jq -r . 00:06:22.543 [2024-07-23 13:48:13.216924] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:22.543 [2024-07-23 13:48:13.216978] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097471 ] 00:06:22.543 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.543 [2024-07-23 13:48:13.270534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.543 [2024-07-23 13:48:13.338815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.923 13:48:14 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:23.923 00:06:23.923 SPDK Configuration: 00:06:23.923 Core mask: 0x1 00:06:23.923 00:06:23.923 Accel Perf Configuration: 00:06:23.923 Workload Type: decompress 00:06:23.923 Transfer size: 111250 bytes 00:06:23.923 Vector count 1 00:06:23.923 Module: software 00:06:23.923 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.923 Queue depth: 32 00:06:23.923 Allocate depth: 32 00:06:23.923 # threads/core: 1 00:06:23.923 Run time: 1 seconds 00:06:23.923 Verify: Yes 00:06:23.923 00:06:23.923 Running for 1 seconds... 
00:06:23.923 00:06:23.923 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.923 ------------------------------------------------------------------------------------ 00:06:23.923 0,0 4832/s 512 MiB/s 0 0 00:06:23.923 ==================================================================================== 00:06:23.923 Total 4832/s 512 MiB/s 0 0' 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:23.923 13:48:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:23.923 13:48:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.923 13:48:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.923 13:48:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.923 13:48:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.923 13:48:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.923 13:48:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.923 13:48:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.923 13:48:14 -- accel/accel.sh@42 -- # jq -r . 00:06:23.923 [2024-07-23 13:48:14.576259] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:23.923 [2024-07-23 13:48:14.576339] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097711 ] 00:06:23.923 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.923 [2024-07-23 13:48:14.632901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.923 [2024-07-23 13:48:14.698726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val= 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val= 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val= 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val=0x1 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val= 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val= 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val=decompress 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var"
in 00:06:23.923 13:48:14 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val= 00:06:23.923 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.923 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.923 13:48:14 -- accel/accel.sh@21 -- # val=software 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.924 13:48:14 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.924 13:48:14 -- accel/accel.sh@21 -- # val=32 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.924 13:48:14 -- accel/accel.sh@21 -- # val=32 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.924 13:48:14 -- accel/accel.sh@21 -- # val=1 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.924 13:48:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.924 13:48:14 -- accel/accel.sh@21 -- # val=Yes 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.924 13:48:14 -- accel/accel.sh@21 -- # val= 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:23.924 13:48:14 -- accel/accel.sh@21 -- # val= 00:06:23.924 13:48:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # IFS=: 00:06:23.924 13:48:14 -- accel/accel.sh@20 -- # read -r var val 00:06:25.305 13:48:15 -- accel/accel.sh@21 -- # val= 00:06:25.305 13:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # IFS=: 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # read -r var val 00:06:25.305 13:48:15 -- accel/accel.sh@21 -- # val= 00:06:25.305 13:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # IFS=: 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # read -r var val 00:06:25.305 13:48:15 -- accel/accel.sh@21 -- # val= 00:06:25.305 13:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.305 13:48:15 -- 
accel/accel.sh@20 -- # IFS=: 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # read -r var val 00:06:25.305 13:48:15 -- accel/accel.sh@21 -- # val= 00:06:25.305 13:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # IFS=: 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # read -r var val 00:06:25.305 13:48:15 -- accel/accel.sh@21 -- # val= 00:06:25.305 13:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # IFS=: 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # read -r var val 00:06:25.305 13:48:15 -- accel/accel.sh@21 -- # val= 00:06:25.305 13:48:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # IFS=: 00:06:25.305 13:48:15 -- accel/accel.sh@20 -- # read -r var val 00:06:25.305 13:48:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.305 13:48:15 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:25.305 13:48:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.305 00:06:25.305 real 0m2.722s 00:06:25.305 user 0m2.511s 00:06:25.305 sys 0m0.216s 00:06:25.305 13:48:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.305 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.305 ************************************ 00:06:25.305 END TEST accel_decmop_full 00:06:25.305 ************************************ 00:06:25.305 13:48:15 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:25.305 13:48:15 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:25.305 13:48:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:25.305 13:48:15 -- common/autotest_common.sh@10 -- # set +x 00:06:25.305 ************************************ 00:06:25.305 START TEST accel_decomp_mcore 00:06:25.305 ************************************ 00:06:25.305 13:48:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:25.305 13:48:15 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.305 13:48:15 -- accel/accel.sh@17 -- # local accel_module 00:06:25.305 13:48:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:25.305 13:48:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:25.305 13:48:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.305 13:48:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.305 13:48:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.305 13:48:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.305 13:48:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.305 13:48:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.305 13:48:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.305 13:48:15 -- accel/accel.sh@42 -- # jq -r . 00:06:25.305 [2024-07-23 13:48:15.974766] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:25.305 [2024-07-23 13:48:15.974840] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3097958 ] 00:06:25.305 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.305 [2024-07-23 13:48:16.029047] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:25.305 [2024-07-23 13:48:16.099673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.305 [2024-07-23 13:48:16.099772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:25.305 [2024-07-23 13:48:16.099847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:25.305 [2024-07-23 13:48:16.099848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.684 13:48:17 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:26.684 00:06:26.684 SPDK Configuration: 00:06:26.684 Core mask: 0xf 00:06:26.684 00:06:26.684 Accel Perf Configuration: 00:06:26.684 Workload Type: decompress 00:06:26.684 Transfer size: 4096 bytes 00:06:26.684 Vector count 1 00:06:26.684 Module: software 00:06:26.684 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.684 Queue depth: 32 00:06:26.684 Allocate depth: 32 00:06:26.684 # threads/core: 1 00:06:26.684 Run time: 1 seconds 00:06:26.684 Verify: Yes 00:06:26.684 00:06:26.684 Running for 1 seconds... 00:06:26.684 00:06:26.684 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:26.684 ------------------------------------------------------------------------------------ 00:06:26.684 0,0 59776/s 233 MiB/s 0 0 00:06:26.684 3,0 61632/s 240 MiB/s 0 0 00:06:26.684 2,0 61696/s 241 MiB/s 0 0 00:06:26.684 1,0 61568/s 240 MiB/s 0 0 00:06:26.684 ==================================================================================== 00:06:26.684 Total 244672/s 955 MiB/s 0 0' 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.684 13:48:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.684 13:48:17 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.684 13:48:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.684 13:48:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.684 13:48:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.684 13:48:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.684 13:48:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.684 13:48:17 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.684 13:48:17 -- accel/accel.sh@42 -- # jq -r . 00:06:26.684 [2024-07-23 13:48:17.336440] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
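The -m 0xf argument passed to accel_perf above is a hexadecimal core mask selecting cores 0-3, which is why four reactors start and the results table carries four Core,Thread rows. An illustrative one-liner (assumes the usual one-bit-per-core mask convention, not taken from the log itself):
# Core mask for cores 0,1,2,3 as used by the mcore runs in this section
printf 'core mask: 0x%x\n' "$(( (1 << 0) | (1 << 1) | (1 << 2) | (1 << 3) ))"   # core mask: 0xf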
00:06:26.684 [2024-07-23 13:48:17.336515] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098198 ] 00:06:26.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.684 [2024-07-23 13:48:17.391884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.684 [2024-07-23 13:48:17.462683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.684 [2024-07-23 13:48:17.462781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.684 [2024-07-23 13:48:17.462867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.684 [2024-07-23 13:48:17.462869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val= 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val= 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val= 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val=0xf 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val= 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val= 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val=decompress 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val= 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val=software 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@23 -- # accel_module=software 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val=32 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val=32 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val=1 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val=Yes 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val= 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:26.684 13:48:17 -- accel/accel.sh@21 -- # val= 00:06:26.684 13:48:17 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # IFS=: 00:06:26.684 13:48:17 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 
13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@21 -- # val= 00:06:28.061 13:48:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # IFS=: 00:06:28.061 13:48:18 -- accel/accel.sh@20 -- # read -r var val 00:06:28.061 13:48:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:28.061 13:48:18 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:28.061 13:48:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.061 00:06:28.061 real 0m2.730s 00:06:28.061 user 0m9.162s 00:06:28.061 sys 0m0.233s 00:06:28.061 13:48:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.061 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:06:28.061 ************************************ 00:06:28.061 END TEST accel_decomp_mcore 00:06:28.061 ************************************ 00:06:28.061 13:48:18 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.061 13:48:18 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:28.061 13:48:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.061 13:48:18 -- common/autotest_common.sh@10 -- # set +x 00:06:28.061 ************************************ 00:06:28.062 START TEST accel_decomp_full_mcore 00:06:28.062 ************************************ 00:06:28.062 13:48:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.062 13:48:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:28.062 13:48:18 -- accel/accel.sh@17 -- # local accel_module 00:06:28.062 13:48:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.062 13:48:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.062 13:48:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.062 13:48:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.062 13:48:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.062 13:48:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.062 13:48:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:28.062 13:48:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.062 13:48:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.062 13:48:18 -- accel/accel.sh@42 -- # jq -r . 00:06:28.062 [2024-07-23 13:48:18.735727] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:28.062 [2024-07-23 13:48:18.735781] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098458 ] 00:06:28.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.062 [2024-07-23 13:48:18.789010] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.062 [2024-07-23 13:48:18.859333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.062 [2024-07-23 13:48:18.859434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.062 [2024-07-23 13:48:18.859498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.062 [2024-07-23 13:48:18.859499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.439 13:48:20 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:29.439 00:06:29.439 SPDK Configuration: 00:06:29.439 Core mask: 0xf 00:06:29.439 00:06:29.439 Accel Perf Configuration: 00:06:29.439 Workload Type: decompress 00:06:29.439 Transfer size: 111250 bytes 00:06:29.439 Vector count 1 00:06:29.439 Module: software 00:06:29.439 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.439 Queue depth: 32 00:06:29.439 Allocate depth: 32 00:06:29.439 # threads/core: 1 00:06:29.439 Run time: 1 seconds 00:06:29.439 Verify: Yes 00:06:29.439 00:06:29.439 Running for 1 seconds... 00:06:29.439 00:06:29.439 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:29.439 ------------------------------------------------------------------------------------ 00:06:29.439 0,0 4512/s 478 MiB/s 0 0 00:06:29.439 3,0 4672/s 495 MiB/s 0 0 00:06:29.439 2,0 4640/s 492 MiB/s 0 0 00:06:29.439 1,0 4672/s 495 MiB/s 0 0 00:06:29.439 ==================================================================================== 00:06:29.439 Total 18496/s 1962 MiB/s 0 0' 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.439 13:48:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:29.439 13:48:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.439 13:48:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.439 13:48:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.439 13:48:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.439 13:48:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.439 13:48:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.439 13:48:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.439 13:48:20 -- accel/accel.sh@42 -- # jq -r . 00:06:29.439 [2024-07-23 13:48:20.108547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:29.439 [2024-07-23 13:48:20.108628] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098695 ] 00:06:29.439 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.439 [2024-07-23 13:48:20.162378] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:29.439 [2024-07-23 13:48:20.233736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.439 [2024-07-23 13:48:20.233836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.439 [2024-07-23 13:48:20.233914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:29.439 [2024-07-23 13:48:20.233916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val= 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val= 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val= 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val=0xf 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val= 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val= 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val=decompress 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val= 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val=software 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@23 -- # accel_module=software 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.439 13:48:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.439 13:48:20 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.439 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.440 13:48:20 -- accel/accel.sh@21 -- # val=32 00:06:29.440 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.440 13:48:20 -- accel/accel.sh@21 -- # val=32 00:06:29.440 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.440 13:48:20 -- accel/accel.sh@21 -- # val=1 00:06:29.440 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.440 13:48:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:29.440 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.440 13:48:20 -- accel/accel.sh@21 -- # val=Yes 00:06:29.440 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.440 13:48:20 -- accel/accel.sh@21 -- # val= 00:06:29.440 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:29.440 13:48:20 -- accel/accel.sh@21 -- # val= 00:06:29.440 13:48:20 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # IFS=: 00:06:29.440 13:48:20 -- accel/accel.sh@20 -- # read -r var val 00:06:30.818 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.818 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.818 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.818 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.818 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.818 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.818 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.818 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.818 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.818 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.818 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.818 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.818 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.818 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.819 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.819 13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.819 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.819 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.819 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.819 
13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.819 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.819 13:48:21 -- accel/accel.sh@21 -- # val= 00:06:30.819 13:48:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:30.819 13:48:21 -- accel/accel.sh@20 -- # IFS=: 00:06:30.819 13:48:21 -- accel/accel.sh@20 -- # read -r var val 00:06:30.819 13:48:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:30.819 13:48:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:30.819 13:48:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:30.819 00:06:30.819 real 0m2.750s 00:06:30.819 user 0m9.244s 00:06:30.819 sys 0m0.234s 00:06:30.819 13:48:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:30.819 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:30.819 ************************************ 00:06:30.819 END TEST accel_decomp_full_mcore 00:06:30.819 ************************************ 00:06:30.819 13:48:21 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:30.819 13:48:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:30.819 13:48:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.819 13:48:21 -- common/autotest_common.sh@10 -- # set +x 00:06:30.819 ************************************ 00:06:30.819 START TEST accel_decomp_mthread 00:06:30.819 ************************************ 00:06:30.819 13:48:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:30.819 13:48:21 -- accel/accel.sh@16 -- # local accel_opc 00:06:30.819 13:48:21 -- accel/accel.sh@17 -- # local accel_module 00:06:30.819 13:48:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:30.819 13:48:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:30.819 13:48:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.819 13:48:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:30.819 13:48:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:30.819 13:48:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.819 13:48:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:30.819 13:48:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:30.819 13:48:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:30.819 13:48:21 -- accel/accel.sh@42 -- # jq -r . 00:06:30.819 [2024-07-23 13:48:21.521269] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:30.819 [2024-07-23 13:48:21.521345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3098947 ] 00:06:30.819 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.819 [2024-07-23 13:48:21.575938] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.819 [2024-07-23 13:48:21.644241] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.198 13:48:22 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:32.198 00:06:32.198 SPDK Configuration: 00:06:32.198 Core mask: 0x1 00:06:32.198 00:06:32.198 Accel Perf Configuration: 00:06:32.198 Workload Type: decompress 00:06:32.198 Transfer size: 4096 bytes 00:06:32.198 Vector count 1 00:06:32.198 Module: software 00:06:32.198 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.198 Queue depth: 32 00:06:32.198 Allocate depth: 32 00:06:32.198 # threads/core: 2 00:06:32.198 Run time: 1 seconds 00:06:32.198 Verify: Yes 00:06:32.198 00:06:32.198 Running for 1 seconds... 00:06:32.198 00:06:32.198 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:32.198 ------------------------------------------------------------------------------------ 00:06:32.198 0,1 37216/s 145 MiB/s 0 0 00:06:32.198 0,0 37120/s 145 MiB/s 0 0 00:06:32.198 ==================================================================================== 00:06:32.198 Total 74336/s 290 MiB/s 0 0' 00:06:32.198 13:48:22 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:22 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:32.198 13:48:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:32.198 13:48:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.198 13:48:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.198 13:48:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.198 13:48:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.198 13:48:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.198 13:48:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.198 13:48:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:32.198 13:48:22 -- accel/accel.sh@42 -- # jq -r . 00:06:32.198 [2024-07-23 13:48:22.872711] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:32.198 [2024-07-23 13:48:22.872777] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099189 ] 00:06:32.198 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.198 [2024-07-23 13:48:22.926754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.198 [2024-07-23 13:48:22.995607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val= 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val= 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val= 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val=0x1 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val= 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val= 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val=decompress 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val= 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val=software 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@23 -- # accel_module=software 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:32.198 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.198 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.198 13:48:23 -- accel/accel.sh@21 -- # val=32 00:06:32.199 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.199 13:48:23 
-- accel/accel.sh@20 -- # read -r var val 00:06:32.199 13:48:23 -- accel/accel.sh@21 -- # val=32 00:06:32.199 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.199 13:48:23 -- accel/accel.sh@21 -- # val=2 00:06:32.199 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.199 13:48:23 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:32.199 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.199 13:48:23 -- accel/accel.sh@21 -- # val=Yes 00:06:32.199 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.199 13:48:23 -- accel/accel.sh@21 -- # val= 00:06:32.199 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:32.199 13:48:23 -- accel/accel.sh@21 -- # val= 00:06:32.199 13:48:23 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # IFS=: 00:06:32.199 13:48:23 -- accel/accel.sh@20 -- # read -r var val 00:06:33.577 13:48:24 -- accel/accel.sh@21 -- # val= 00:06:33.577 13:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # IFS=: 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # read -r var val 00:06:33.577 13:48:24 -- accel/accel.sh@21 -- # val= 00:06:33.577 13:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # IFS=: 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # read -r var val 00:06:33.577 13:48:24 -- accel/accel.sh@21 -- # val= 00:06:33.577 13:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # IFS=: 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # read -r var val 00:06:33.577 13:48:24 -- accel/accel.sh@21 -- # val= 00:06:33.577 13:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # IFS=: 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # read -r var val 00:06:33.577 13:48:24 -- accel/accel.sh@21 -- # val= 00:06:33.577 13:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # IFS=: 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # read -r var val 00:06:33.577 13:48:24 -- accel/accel.sh@21 -- # val= 00:06:33.577 13:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # IFS=: 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # read -r var val 00:06:33.577 13:48:24 -- accel/accel.sh@21 -- # val= 00:06:33.577 13:48:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # IFS=: 00:06:33.577 13:48:24 -- accel/accel.sh@20 -- # read -r var val 00:06:33.577 13:48:24 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.577 13:48:24 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:33.577 13:48:24 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.577 00:06:33.577 real 0m2.711s 00:06:33.577 user 0m2.494s 00:06:33.577 sys 0m0.224s 00:06:33.577 13:48:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.577 13:48:24 -- common/autotest_common.sh@10 -- # set +x 
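The wall of val= lines above is not an idle loop: accel.sh is parsing the key/value report that accel_perf prints (Workload Type, Transfer size, Module, Queue depth, and so on) to learn which opcode and module actually ran, which is what the closing [[ -n software ]] and [[ -n decompress ]] checks assert. A minimal sketch of the idiom visible in the trace, assuming the report arrives on stdin; the one-space trim is an assumption, the variable names are from the trace:

  # split each "Key: value" report line on ':'
  while IFS=: read -r var val; do
      val=${val# }                               # drop the space after the colon
      case "$var" in
          *'Workload Type'*) accel_opc=$val ;;   # e.g. decompress
          *Module*) accel_module=$val ;;         # e.g. software
      esac
  done

Report lines without a colon land entirely in var and leave val empty, which is why so many iterations show a bare val=.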
00:06:33.577 ************************************ 00:06:33.577 END TEST accel_decomp_mthread 00:06:33.577 ************************************ 00:06:33.577 13:48:24 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:33.577 13:48:24 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:33.577 13:48:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.577 13:48:24 -- common/autotest_common.sh@10 -- # set +x 00:06:33.577 ************************************ 00:06:33.577 START TEST accel_deomp_full_mthread 00:06:33.577 ************************************ 00:06:33.577 13:48:24 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:33.577 13:48:24 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.577 13:48:24 -- accel/accel.sh@17 -- # local accel_module 00:06:33.577 13:48:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:33.577 13:48:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:33.577 13:48:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.577 13:48:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.577 13:48:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.577 13:48:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.577 13:48:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.577 13:48:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.577 13:48:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.577 13:48:24 -- accel/accel.sh@42 -- # jq -r . 00:06:33.577 [2024-07-23 13:48:24.264301] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:33.577 [2024-07-23 13:48:24.264357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099436 ] 00:06:33.577 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.577 [2024-07-23 13:48:24.316973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.577 [2024-07-23 13:48:24.385468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.958 13:48:25 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:34.958 00:06:34.958 SPDK Configuration: 00:06:34.958 Core mask: 0x1 00:06:34.958 00:06:34.958 Accel Perf Configuration: 00:06:34.958 Workload Type: decompress 00:06:34.958 Transfer size: 111250 bytes 00:06:34.958 Vector count 1 00:06:34.958 Module: software 00:06:34.958 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.958 Queue depth: 32 00:06:34.958 Allocate depth: 32 00:06:34.958 # threads/core: 2 00:06:34.958 Run time: 1 seconds 00:06:34.958 Verify: Yes 00:06:34.958 00:06:34.958 Running for 1 seconds... 
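A reading aid for the configuration block above and the results table that follows: this is the full-buffer variant of the mthread test. With -o 0 the transfer size becomes the whole 111250-byte decompressed block rather than the 4096 bytes seen in the previous run; that reading of -o 0 is inferred from comparing the two runs, not taken from accel_perf documentation. The Total row can be cross-checked as transfers/s times transfer size, taking MiB as 2^20 bytes:

  # 4992 transfers/s x 111250 B per transfer -> 529 MiB/s, the Total row below
  echo $((4992 * 111250 / 1024 / 1024))   # prints 529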
00:06:34.958 00:06:34.958 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.958 ------------------------------------------------------------------------------------ 00:06:34.958 0,1 2528/s 104 MiB/s 0 0 00:06:34.958 0,0 2464/s 101 MiB/s 0 0 00:06:34.958 ==================================================================================== 00:06:34.958 Total 4992/s 529 MiB/s 0 0' 00:06:34.958 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.958 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.958 13:48:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.958 13:48:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:34.958 13:48:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.958 13:48:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.958 13:48:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.958 13:48:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.958 13:48:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.958 13:48:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.958 13:48:25 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.958 13:48:25 -- accel/accel.sh@42 -- # jq -r . 00:06:34.958 [2024-07-23 13:48:25.640621] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:34.958 [2024-07-23 13:48:25.640692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099676 ] 00:06:34.958 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.958 [2024-07-23 13:48:25.695980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.958 [2024-07-23 13:48:25.763099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.958 13:48:25 -- accel/accel.sh@21 -- # val= 00:06:34.958 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.958 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.958 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.958 13:48:25 -- accel/accel.sh@21 -- # val= 00:06:34.958 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val= 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val=0x1 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val= 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val= 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val=decompress 00:06:34.959 
13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val= 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val=software 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val=32 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val=32 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val=2 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val=Yes 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val= 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:34.959 13:48:25 -- accel/accel.sh@21 -- # val= 00:06:34.959 13:48:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # IFS=: 00:06:34.959 13:48:25 -- accel/accel.sh@20 -- # read -r var val 00:06:36.376 13:48:26 -- accel/accel.sh@21 -- # val= 00:06:36.376 13:48:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # IFS=: 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # read -r var val 00:06:36.376 13:48:26 -- accel/accel.sh@21 -- # val= 00:06:36.376 13:48:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # IFS=: 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # read -r var val 00:06:36.376 13:48:26 -- accel/accel.sh@21 -- # val= 00:06:36.376 13:48:26 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # IFS=: 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # read -r var val 00:06:36.376 13:48:26 -- accel/accel.sh@21 -- # val= 00:06:36.376 13:48:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # IFS=: 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # read -r var val 00:06:36.376 13:48:26 -- accel/accel.sh@21 -- # val= 00:06:36.376 13:48:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # IFS=: 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # read -r var val 00:06:36.376 13:48:26 -- accel/accel.sh@21 -- # val= 00:06:36.376 13:48:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # IFS=: 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # read -r var val 00:06:36.376 13:48:26 -- accel/accel.sh@21 -- # val= 00:06:36.376 13:48:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # IFS=: 00:06:36.376 13:48:26 -- accel/accel.sh@20 -- # read -r var val 00:06:36.376 13:48:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:36.376 13:48:26 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:36.376 13:48:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:36.376 00:06:36.376 real 0m2.754s 00:06:36.376 user 0m2.541s 00:06:36.376 sys 0m0.218s 00:06:36.376 13:48:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.376 13:48:26 -- common/autotest_common.sh@10 -- # set +x 00:06:36.376 ************************************ 00:06:36.376 END TEST accel_deomp_full_mthread 00:06:36.376 ************************************ 00:06:36.376 13:48:27 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:36.376 13:48:27 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:36.376 13:48:27 -- accel/accel.sh@129 -- # build_accel_config 00:06:36.376 13:48:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:36.376 13:48:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.376 13:48:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:36.376 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:06:36.376 13:48:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:36.376 13:48:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:36.376 13:48:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:36.376 13:48:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:36.376 13:48:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:36.376 13:48:27 -- accel/accel.sh@42 -- # jq -r . 00:06:36.376 ************************************ 00:06:36.376 START TEST accel_dif_functional_tests 00:06:36.376 ************************************ 00:06:36.376 13:48:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:36.376 [2024-07-23 13:48:27.072291] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:36.376 [2024-07-23 13:48:27.072336] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3099929 ] 00:06:36.376 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.376 [2024-07-23 13:48:27.123489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.376 [2024-07-23 13:48:27.193916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.376 [2024-07-23 13:48:27.194014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.376 [2024-07-23 13:48:27.194015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.376 00:06:36.376 00:06:36.376 CUnit - A unit testing framework for C - Version 2.1-3 00:06:36.376 http://cunit.sourceforge.net/ 00:06:36.376 00:06:36.376 00:06:36.376 Suite: accel_dif 00:06:36.376 Test: verify: DIF generated, GUARD check ...passed 00:06:36.376 Test: verify: DIF generated, APPTAG check ...passed 00:06:36.376 Test: verify: DIF generated, REFTAG check ...passed 00:06:36.376 Test: verify: DIF not generated, GUARD check ...[2024-07-23 13:48:27.262179] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:36.376 [2024-07-23 13:48:27.262221] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:36.376 passed 00:06:36.376 Test: verify: DIF not generated, APPTAG check ...[2024-07-23 13:48:27.262253] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:36.376 [2024-07-23 13:48:27.262267] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:36.376 passed 00:06:36.376 Test: verify: DIF not generated, REFTAG check ...[2024-07-23 13:48:27.262283] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:36.376 [2024-07-23 13:48:27.262298] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:36.376 passed 00:06:36.376 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:36.376 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-23 13:48:27.262338] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:36.376 passed 00:06:36.376 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:36.376 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:36.376 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:36.376 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-23 13:48:27.262430] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:36.376 passed 00:06:36.376 Test: generate copy: DIF generated, GUARD check ...passed 00:06:36.376 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:36.376 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:36.376 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:36.376 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:36.376 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:36.377 Test: generate copy: iovecs-len validate ...[2024-07-23 13:48:27.262590] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:36.377 passed 00:06:36.377 Test: generate copy: buffer alignment validate ...passed 00:06:36.377 00:06:36.377 Run Summary: Type Total Ran Passed Failed Inactive 00:06:36.377 suites 1 1 n/a 0 0 00:06:36.377 tests 20 20 20 0 0 00:06:36.377 asserts 204 204 204 0 n/a 00:06:36.377 00:06:36.377 Elapsed time = 0.000 seconds 00:06:36.638 00:06:36.638 real 0m0.425s 00:06:36.638 user 0m0.642s 00:06:36.638 sys 0m0.139s 00:06:36.638 13:48:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.638 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:06:36.638 ************************************ 00:06:36.638 END TEST accel_dif_functional_tests 00:06:36.638 ************************************ 00:06:36.638 00:06:36.638 real 0m57.641s 00:06:36.638 user 1m6.357s 00:06:36.638 sys 0m5.911s 00:06:36.638 13:48:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.638 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:06:36.638 ************************************ 00:06:36.638 END TEST accel 00:06:36.638 ************************************ 00:06:36.638 13:48:27 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:36.638 13:48:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.638 13:48:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.638 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:06:36.638 ************************************ 00:06:36.638 START TEST accel_rpc 00:06:36.638 ************************************ 00:06:36.638 13:48:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:36.638 * Looking for test storage... 00:06:36.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:36.638 13:48:27 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:36.638 13:48:27 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3100181 00:06:36.638 13:48:27 -- accel/accel_rpc.sh@15 -- # waitforlisten 3100181 00:06:36.638 13:48:27 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:36.638 13:48:27 -- common/autotest_common.sh@819 -- # '[' -z 3100181 ']' 00:06:36.638 13:48:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.638 13:48:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.638 13:48:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.638 13:48:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.638 13:48:27 -- common/autotest_common.sh@10 -- # set +x 00:06:36.898 [2024-07-23 13:48:27.656132] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
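For the accel_rpc test below, spdk_tgt is launched with --wait-for-rpc, which parks the app before subsystem init precisely so opcode-to-module assignments can be staged first; the accel_assign_opcode sub-test then confirms that the most recent assignment is the one in force once the framework starts. The sequence, with rpc.py standing in for the script's rpc_cmd wrapper (method names and arguments are verbatim from the trace):

  rpc.py accel_assign_opc -o copy -m incorrect   # staged pre-init, then overridden
  rpc.py accel_assign_opc -o copy -m software    # the last assignment wins
  rpc.py framework_start_init
  rpc.py accel_get_opc_assignments | jq -r .copy | grep software   # prints: software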
00:06:36.898 [2024-07-23 13:48:27.656180] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100181 ] 00:06:36.898 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.899 [2024-07-23 13:48:27.709233] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.899 [2024-07-23 13:48:27.789172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.899 [2024-07-23 13:48:27.789279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.469 13:48:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.469 13:48:28 -- common/autotest_common.sh@852 -- # return 0 00:06:37.469 13:48:28 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:37.469 13:48:28 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:37.469 13:48:28 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:37.469 13:48:28 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:37.469 13:48:28 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:37.469 13:48:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.469 13:48:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.469 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:06:37.469 ************************************ 00:06:37.469 START TEST accel_assign_opcode 00:06:37.469 ************************************ 00:06:37.469 13:48:28 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:37.469 13:48:28 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:37.469 13:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:37.469 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:06:37.469 [2024-07-23 13:48:28.455222] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:37.469 13:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:37.469 13:48:28 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:37.469 13:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:37.469 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:06:37.469 [2024-07-23 13:48:28.463239] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:37.469 13:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:37.469 13:48:28 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:37.469 13:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:37.469 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:06:37.730 13:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:37.730 13:48:28 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:37.730 13:48:28 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:37.730 13:48:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:37.730 13:48:28 -- accel/accel_rpc.sh@42 -- # grep software 00:06:37.730 13:48:28 -- common/autotest_common.sh@10 -- # set +x 00:06:37.730 13:48:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:37.730 software 00:06:37.730 00:06:37.730 real 0m0.232s 00:06:37.730 user 0m0.044s 00:06:37.730 sys 0m0.011s 00:06:37.730 13:48:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.730 13:48:28 -- common/autotest_common.sh@10 -- # set +x 
00:06:37.730 ************************************ 00:06:37.730 END TEST accel_assign_opcode 00:06:37.730 ************************************ 00:06:37.730 13:48:28 -- accel/accel_rpc.sh@55 -- # killprocess 3100181 00:06:37.730 13:48:28 -- common/autotest_common.sh@926 -- # '[' -z 3100181 ']' 00:06:37.730 13:48:28 -- common/autotest_common.sh@930 -- # kill -0 3100181 00:06:37.730 13:48:28 -- common/autotest_common.sh@931 -- # uname 00:06:37.730 13:48:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.730 13:48:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3100181 00:06:37.730 13:48:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.730 13:48:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.730 13:48:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3100181' 00:06:37.730 killing process with pid 3100181 00:06:37.730 13:48:28 -- common/autotest_common.sh@945 -- # kill 3100181 00:06:37.730 13:48:28 -- common/autotest_common.sh@950 -- # wait 3100181 00:06:38.300 00:06:38.300 real 0m1.551s 00:06:38.300 user 0m1.612s 00:06:38.300 sys 0m0.385s 00:06:38.300 13:48:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.300 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.300 ************************************ 00:06:38.300 END TEST accel_rpc 00:06:38.300 ************************************ 00:06:38.300 13:48:29 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.300 13:48:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:38.300 13:48:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.300 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.300 ************************************ 00:06:38.300 START TEST app_cmdline 00:06:38.300 ************************************ 00:06:38.300 13:48:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:38.300 * Looking for test storage... 00:06:38.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:38.300 13:48:29 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:38.300 13:48:29 -- app/cmdline.sh@17 -- # spdk_tgt_pid=3100516 00:06:38.300 13:48:29 -- app/cmdline.sh@18 -- # waitforlisten 3100516 00:06:38.300 13:48:29 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:38.300 13:48:29 -- common/autotest_common.sh@819 -- # '[' -z 3100516 ']' 00:06:38.300 13:48:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.300 13:48:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.300 13:48:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.300 13:48:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.300 13:48:29 -- common/autotest_common.sh@10 -- # set +x 00:06:38.300 [2024-07-23 13:48:29.243811] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
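The cmdline test inverts the usual setup: spdk_tgt runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and everything else must be rejected. That is what the NOT env_dpdk_get_mem_stats exercise below verifies; the -32601 "Method not found" response is the pass condition, not a defect. In outline, again with rpc.py standing in for rpc_cmd:

  rpc.py spdk_get_version         # allowed: returns the version object shown below
  rpc.py rpc_get_methods          # allowed: must list exactly the two methods
  rpc.py env_dpdk_get_mem_stats   # must fail with JSON-RPC error -32601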
00:06:38.300 [2024-07-23 13:48:29.243862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100516 ] 00:06:38.300 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.300 [2024-07-23 13:48:29.296850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.560 [2024-07-23 13:48:29.369965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.560 [2024-07-23 13:48:29.370086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.130 13:48:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.130 13:48:30 -- common/autotest_common.sh@852 -- # return 0 00:06:39.130 13:48:30 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:39.391 { 00:06:39.391 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:06:39.391 "fields": { 00:06:39.391 "major": 24, 00:06:39.391 "minor": 1, 00:06:39.391 "patch": 1, 00:06:39.391 "suffix": "-pre", 00:06:39.391 "commit": "dbef7efac" 00:06:39.391 } 00:06:39.391 } 00:06:39.391 13:48:30 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:39.391 13:48:30 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:39.391 13:48:30 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:39.391 13:48:30 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:39.391 13:48:30 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:39.391 13:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.391 13:48:30 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:39.391 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.391 13:48:30 -- app/cmdline.sh@26 -- # sort 00:06:39.391 13:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.391 13:48:30 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:39.391 13:48:30 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:39.391 13:48:30 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.391 13:48:30 -- common/autotest_common.sh@640 -- # local es=0 00:06:39.391 13:48:30 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.391 13:48:30 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.391 13:48:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.391 13:48:30 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.391 13:48:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.391 13:48:30 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.391 13:48:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.391 13:48:30 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:39.391 13:48:30 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:39.391 13:48:30 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:39.391 request: 00:06:39.391 { 00:06:39.391 "method": "env_dpdk_get_mem_stats", 00:06:39.391 "req_id": 1 00:06:39.391 } 00:06:39.391 Got JSON-RPC error response 00:06:39.391 response: 00:06:39.391 { 00:06:39.391 "code": -32601, 00:06:39.391 "message": "Method not found" 00:06:39.391 } 00:06:39.391 13:48:30 -- common/autotest_common.sh@643 -- # es=1 00:06:39.391 13:48:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:39.391 13:48:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:39.391 13:48:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:39.391 13:48:30 -- app/cmdline.sh@1 -- # killprocess 3100516 00:06:39.391 13:48:30 -- common/autotest_common.sh@926 -- # '[' -z 3100516 ']' 00:06:39.391 13:48:30 -- common/autotest_common.sh@930 -- # kill -0 3100516 00:06:39.391 13:48:30 -- common/autotest_common.sh@931 -- # uname 00:06:39.391 13:48:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.391 13:48:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3100516 00:06:39.651 13:48:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.651 13:48:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.651 13:48:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3100516' 00:06:39.651 killing process with pid 3100516 00:06:39.651 13:48:30 -- common/autotest_common.sh@945 -- # kill 3100516 00:06:39.651 13:48:30 -- common/autotest_common.sh@950 -- # wait 3100516 00:06:39.912 00:06:39.912 real 0m1.658s 00:06:39.912 user 0m1.981s 00:06:39.912 sys 0m0.393s 00:06:39.912 13:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.912 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.912 ************************************ 00:06:39.912 END TEST app_cmdline 00:06:39.912 ************************************ 00:06:39.912 13:48:30 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:39.912 13:48:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.912 13:48:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.912 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:39.912 ************************************ 00:06:39.912 START TEST version 00:06:39.912 ************************************ 00:06:39.912 13:48:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:39.912 * Looking for test storage... 
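version.sh cross-checks three derivations of the version string: the C header, the shell-assembled value, and the Python package. The header side is plain text scraping, visible verbatim in the trace below; as a condensed helper, with $rootdir again standing in for the workspace prefix:

  get_header_version() {   # field: major, minor, patch or suffix
      grep -E "^#define SPDK_VERSION_${1^^}[[:space:]]+" \
          "$rootdir/include/spdk/version.h" | cut -f2 | tr -d '"'
  }
  get_header_version major   # prints 24

From 24, 1, 1 and -pre the script assembles 24.1.1rc0, then compares it against python3 -c 'import spdk; print(spdk.__version__)'; the test passes because both sides agree.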
00:06:39.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:39.912 13:48:30 -- app/version.sh@17 -- # get_header_version major 00:06:39.912 13:48:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:39.912 13:48:30 -- app/version.sh@14 -- # cut -f2 00:06:39.912 13:48:30 -- app/version.sh@14 -- # tr -d '"' 00:06:39.912 13:48:30 -- app/version.sh@17 -- # major=24 00:06:39.912 13:48:30 -- app/version.sh@18 -- # get_header_version minor 00:06:39.912 13:48:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:39.912 13:48:30 -- app/version.sh@14 -- # cut -f2 00:06:39.912 13:48:30 -- app/version.sh@14 -- # tr -d '"' 00:06:39.912 13:48:30 -- app/version.sh@18 -- # minor=1 00:06:39.912 13:48:30 -- app/version.sh@19 -- # get_header_version patch 00:06:39.912 13:48:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:39.912 13:48:30 -- app/version.sh@14 -- # cut -f2 00:06:39.912 13:48:30 -- app/version.sh@14 -- # tr -d '"' 00:06:39.912 13:48:30 -- app/version.sh@19 -- # patch=1 00:06:39.912 13:48:30 -- app/version.sh@20 -- # get_header_version suffix 00:06:39.912 13:48:30 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:39.912 13:48:30 -- app/version.sh@14 -- # cut -f2 00:06:39.912 13:48:30 -- app/version.sh@14 -- # tr -d '"' 00:06:39.912 13:48:30 -- app/version.sh@20 -- # suffix=-pre 00:06:39.912 13:48:30 -- app/version.sh@22 -- # version=24.1 00:06:39.912 13:48:30 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:39.912 13:48:30 -- app/version.sh@25 -- # version=24.1.1 00:06:39.912 13:48:30 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:39.912 13:48:30 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:39.912 13:48:30 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:40.172 13:48:30 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:40.172 13:48:30 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:40.172 00:06:40.172 real 0m0.143s 00:06:40.172 user 0m0.084s 00:06:40.172 sys 0m0.092s 00:06:40.172 13:48:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.172 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:40.172 ************************************ 00:06:40.172 END TEST version 00:06:40.172 ************************************ 00:06:40.172 13:48:30 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:40.172 13:48:30 -- spdk/autotest.sh@204 -- # uname -s 00:06:40.172 13:48:30 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:40.172 13:48:30 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:40.172 13:48:30 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:40.172 13:48:30 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:40.172 13:48:30 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:40.172 13:48:30 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:40.172 13:48:30 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:40.172 13:48:30 -- common/autotest_common.sh@10 -- # set +x 00:06:40.172 13:48:31 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:40.172 13:48:31 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:40.172 13:48:31 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:40.172 13:48:31 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:40.172 13:48:31 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:40.172 13:48:31 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:40.172 13:48:31 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:40.172 13:48:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:40.172 13:48:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.172 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.172 ************************************ 00:06:40.172 START TEST nvmf_tcp 00:06:40.172 ************************************ 00:06:40.172 13:48:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:40.172 * Looking for test storage... 00:06:40.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:40.172 13:48:31 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:40.172 13:48:31 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:40.172 13:48:31 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.172 13:48:31 -- nvmf/common.sh@7 -- # uname -s 00:06:40.172 13:48:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.172 13:48:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.173 13:48:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.173 13:48:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.173 13:48:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.173 13:48:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.173 13:48:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.173 13:48:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.173 13:48:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.173 13:48:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.173 13:48:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:40.173 13:48:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:40.173 13:48:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.173 13:48:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.173 13:48:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.173 13:48:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.173 13:48:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.173 13:48:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.173 13:48:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.173 13:48:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.173 13:48:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.173 13:48:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.173 13:48:31 -- paths/export.sh@5 -- # export PATH 00:06:40.173 13:48:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.173 13:48:31 -- nvmf/common.sh@46 -- # : 0 00:06:40.173 13:48:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:40.173 13:48:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:40.173 13:48:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:40.173 13:48:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.173 13:48:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.173 13:48:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:40.173 13:48:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:40.173 13:48:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:40.173 13:48:31 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:40.173 13:48:31 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:40.173 13:48:31 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:40.173 13:48:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:40.173 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 13:48:31 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:40.173 13:48:31 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:40.173 13:48:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:40.173 13:48:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.173 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.173 ************************************ 00:06:40.173 START TEST nvmf_example 00:06:40.173 ************************************ 00:06:40.173 13:48:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:40.433 * Looking for test storage... 
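Two notes on the block above. First, the enormous PATH lines are not corruption: each time nvmf/common.sh is sourced it re-sources /etc/opt/spdk-pkgdep/paths/export.sh, which prepends the go, protoc and golangci toolchain directories again, so the prefix grows with every nesting. Second, common.sh pins the NVMe-oF defaults used by every target test; values verbatim from the trace, where they reappear on each sourcing:

  NVMF_PORT=4420
  NVMF_SECOND_PORT=4421
  NVMF_THIRD_PORT=4422
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # nqn.2014-08.org.nvmexpress:uuid:<host uuid>
  NET_TYPE=phy                       # physical NICs, so real ports get namespaced below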
00:06:40.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.433 13:48:31 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.433 13:48:31 -- nvmf/common.sh@7 -- # uname -s 00:06:40.433 13:48:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.433 13:48:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.433 13:48:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.434 13:48:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.434 13:48:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.434 13:48:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.434 13:48:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.434 13:48:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.434 13:48:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.434 13:48:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.434 13:48:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:40.434 13:48:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:40.434 13:48:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:40.434 13:48:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.434 13:48:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.434 13:48:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.434 13:48:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.434 13:48:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.434 13:48:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.434 13:48:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.434 13:48:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.434 13:48:31 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.434 13:48:31 -- paths/export.sh@5 -- # export PATH 00:06:40.434 13:48:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.434 13:48:31 -- nvmf/common.sh@46 -- # : 0 00:06:40.434 13:48:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:40.434 13:48:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:40.434 13:48:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:40.434 13:48:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.434 13:48:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.434 13:48:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:40.434 13:48:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:40.434 13:48:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:40.434 13:48:31 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:40.434 13:48:31 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:40.434 13:48:31 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:40.434 13:48:31 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:40.434 13:48:31 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:40.434 13:48:31 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:40.434 13:48:31 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:40.434 13:48:31 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:40.434 13:48:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:40.434 13:48:31 -- common/autotest_common.sh@10 -- # set +x 00:06:40.434 13:48:31 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:40.434 13:48:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:40.434 13:48:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.434 13:48:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:40.434 13:48:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:40.434 13:48:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:40.434 13:48:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.434 13:48:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:40.434 13:48:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.434 13:48:31 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:40.434 13:48:31 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:40.434 13:48:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:40.434 13:48:31 -- 
common/autotest_common.sh@10 -- # set +x 00:06:45.714 13:48:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:45.714 13:48:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:45.714 13:48:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:45.714 13:48:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:45.714 13:48:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:45.714 13:48:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:45.714 13:48:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:45.714 13:48:36 -- nvmf/common.sh@294 -- # net_devs=() 00:06:45.714 13:48:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:45.714 13:48:36 -- nvmf/common.sh@295 -- # e810=() 00:06:45.714 13:48:36 -- nvmf/common.sh@295 -- # local -ga e810 00:06:45.714 13:48:36 -- nvmf/common.sh@296 -- # x722=() 00:06:45.714 13:48:36 -- nvmf/common.sh@296 -- # local -ga x722 00:06:45.714 13:48:36 -- nvmf/common.sh@297 -- # mlx=() 00:06:45.715 13:48:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:45.715 13:48:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:45.715 13:48:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:45.715 13:48:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:45.715 13:48:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:45.715 13:48:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:45.715 13:48:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:45.715 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:45.715 13:48:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:45.715 13:48:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:45.715 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:45.715 13:48:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
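The scan above is gather_supported_nvmf_pci_devs matching PCI device IDs against known NIC families: 0x1592 and 0x159b populate the e810 array, 0x37d2 the x722 array, and the 0x15b3 vendor entries the mlx array. The backslash-riddled comparisons such as [[ 0x159b == \0\x\1\0\1\7 ]] are just xtrace rendering a quoted, literal (non-glob) match. Since [[ e810 == e810 ]] holds here, pci_devs is narrowed to the two E810 ports, and each PCI function is mapped to its kernel netdev with a /sys glob, verbatim in the records that follow:

  pci=0000:86:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to .../net/cvl_0_0
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path: cvl_0_0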
00:06:45.715 13:48:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:45.715 13:48:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:45.715 13:48:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.715 13:48:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:45.715 13:48:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.715 13:48:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:45.715 Found net devices under 0000:86:00.0: cvl_0_0 00:06:45.715 13:48:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.715 13:48:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:45.715 13:48:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:45.715 13:48:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:45.715 13:48:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:45.715 13:48:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:45.715 Found net devices under 0000:86:00.1: cvl_0_1 00:06:45.715 13:48:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:45.715 13:48:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:45.715 13:48:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:45.715 13:48:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:45.715 13:48:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:45.715 13:48:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:45.715 13:48:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:45.715 13:48:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:45.715 13:48:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:45.715 13:48:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:45.715 13:48:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:45.715 13:48:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:45.715 13:48:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:45.715 13:48:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:45.715 13:48:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:45.715 13:48:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:45.715 13:48:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:45.715 13:48:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:45.715 13:48:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:45.715 13:48:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:45.715 13:48:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:45.715 13:48:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:45.715 13:48:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:45.715 13:48:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:45.715 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:45.715 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:06:45.715 00:06:45.715 --- 10.0.0.2 ping statistics --- 00:06:45.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.715 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:06:45.715 13:48:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:45.715 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:45.715 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.355 ms 00:06:45.715 00:06:45.715 --- 10.0.0.1 ping statistics --- 00:06:45.715 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:45.715 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:06:45.715 13:48:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:45.715 13:48:36 -- nvmf/common.sh@410 -- # return 0 00:06:45.715 13:48:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:45.715 13:48:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:45.715 13:48:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:45.715 13:48:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:45.715 13:48:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:45.715 13:48:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:45.715 13:48:36 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:45.715 13:48:36 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:45.715 13:48:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:45.715 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.715 13:48:36 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:45.715 13:48:36 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:45.715 13:48:36 -- target/nvmf_example.sh@34 -- # nvmfpid=3103918 00:06:45.715 13:48:36 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:45.715 13:48:36 -- target/nvmf_example.sh@36 -- # waitforlisten 3103918 00:06:45.715 13:48:36 -- common/autotest_common.sh@819 -- # '[' -z 3103918 ']' 00:06:45.715 13:48:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.715 13:48:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:45.715 13:48:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
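
nvmf_tcp_init, traced above, builds the test topology out of the two detected E810 ports: cvl_0_0 becomes the target and is moved into its own network namespace (cvl_0_0_ns_spdk) at 10.0.0.2/24, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24; TCP port 4420 is opened in the firewall and both directions are verified with a single ping. Condensed from the commands in the trace (interface names are specific to this machine):

# Target NIC in its own namespace; initiator NIC in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe-oF default port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator

Anything that must see the target NIC is then run under ip netns exec cvl_0_0_ns_spdk, which is exactly how the nvmf example app is launched below.
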
00:06:45.715 13:48:36 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:45.715 13:48:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:45.715 13:48:36 -- common/autotest_common.sh@10 -- # set +x 00:06:45.715 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.284 13:48:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:46.284 13:48:37 -- common/autotest_common.sh@852 -- # return 0 00:06:46.284 13:48:37 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:46.284 13:48:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:46.284 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.284 13:48:37 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:46.284 13:48:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.284 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.285 13:48:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.285 13:48:37 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:46.285 13:48:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.285 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.285 13:48:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.285 13:48:37 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:46.285 13:48:37 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:46.285 13:48:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.285 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.285 13:48:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.285 13:48:37 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:46.285 13:48:37 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:46.285 13:48:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.285 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.544 13:48:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.544 13:48:37 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:46.544 13:48:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:46.544 13:48:37 -- common/autotest_common.sh@10 -- # set +x 00:06:46.544 13:48:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:46.544 13:48:37 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:46.544 13:48:37 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:46.544 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.526 Initializing NVMe Controllers 00:06:56.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:56.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:56.526 Initialization complete. Launching workers. 
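
The provisioning just traced is five rpc_cmd calls against the example target (launched above inside the namespace as build/examples/nvmf -i 0 -g 10000 -m 0xF). rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock, so an equivalent standalone sequence would look roughly like this (paths relative to an SPDK checkout):

# Transport -> backing bdev -> subsystem -> namespace -> listener.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192             # TCP transport, 8 KiB in-capsule data
./scripts/rpc.py bdev_malloc_create 64 512                           # 64 MiB RAM bdev, 512 B blocks -> "Malloc0"
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Load it exactly as the test does: queue depth 64, 4 KiB I/O,
# a 30% read mix (-M 30), for 10 seconds, over the TCP listener.
./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The result table that follows is internally consistent: 13840.84 IOPS x 4096 B ≈ 54.07 MiB/s, and by Little's law 13840.84 IOPS x 4627.02 µs average latency ≈ 64 commands in flight — the run sat at its requested queue depth, as expected for a saturating 4 KiB random workload.
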
00:06:56.526 ========================================================
00:06:56.526 Latency(us)
00:06:56.526 Device Information : IOPS MiB/s Average min max
00:06:56.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13840.84 54.07 4627.02 691.32 15492.80
00:06:56.526 ========================================================
00:06:56.526 Total : 13840.84 54.07 4627.02 691.32 15492.80
00:06:56.526
00:06:56.526 13:48:47 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:56.526 13:48:47 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:56.526 13:48:47 -- nvmf/common.sh@476 -- # nvmfcleanup
00:06:56.526 13:48:47 -- nvmf/common.sh@116 -- # sync
00:06:56.526 13:48:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:06:56.526 13:48:47 -- nvmf/common.sh@119 -- # set +e
00:06:56.526 13:48:47 -- nvmf/common.sh@120 -- # for i in {1..20}
00:06:56.526 13:48:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:06:56.526 rmmod nvme_tcp
00:06:56.526 rmmod nvme_fabrics
00:06:56.526 rmmod nvme_keyring
00:06:56.526 13:48:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:06:56.526 13:48:47 -- nvmf/common.sh@123 -- # set -e
00:06:56.526 13:48:47 -- nvmf/common.sh@124 -- # return 0
00:06:56.526 13:48:47 -- nvmf/common.sh@477 -- # '[' -n 3103918 ']'
00:06:56.526 13:48:47 -- nvmf/common.sh@478 -- # killprocess 3103918
00:06:56.526 13:48:47 -- common/autotest_common.sh@926 -- # '[' -z 3103918 ']'
00:06:56.526 13:48:47 -- common/autotest_common.sh@930 -- # kill -0 3103918
00:06:56.526 13:48:47 -- common/autotest_common.sh@931 -- # uname
00:06:56.786 13:48:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:56.786 13:48:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3103918
00:06:56.786 13:48:47 -- common/autotest_common.sh@932 -- # process_name=nvmf
00:06:56.786 13:48:47 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']'
00:06:56.786 13:48:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3103918'
00:06:56.786 killing process with pid 3103918
00:06:56.786 13:48:47 -- common/autotest_common.sh@945 -- # kill 3103918
00:06:56.786 13:48:47 -- common/autotest_common.sh@950 -- # wait 3103918
00:06:56.786 nvmf threads initialized successfully
00:06:56.786 bdev subsystem initialized successfully
00:06:56.786 created an nvmf target service
00:06:56.786 created the target's poll groups
00:06:56.786 all subsystems of target started
00:06:56.786 nvmf target is running
00:06:56.786 all subsystems of target stopped
00:06:56.786 destroyed the target's poll groups
00:06:56.786 destroyed the nvmf target service
00:06:56.786 bdev subsystem finished successfully
00:06:56.786 nvmf threads destroyed successfully
00:06:56.786 13:48:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:06:56.786 13:48:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:06:56.786 13:48:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:06:56.786 13:48:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:56.786 13:48:47 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:06:56.786 13:48:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:56.786 13:48:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:56.786 13:48:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:59.329 13:48:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:06:59.329 13:48:49 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:06:59.329 13:48:49 --
common/autotest_common.sh@718 -- # xtrace_disable 00:06:59.329 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:59.329 00:06:59.329 real 0m18.714s 00:06:59.329 user 0m45.406s 00:06:59.329 sys 0m5.223s 00:06:59.329 13:48:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.329 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:59.329 ************************************ 00:06:59.329 END TEST nvmf_example 00:06:59.329 ************************************ 00:06:59.329 13:48:49 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:59.329 13:48:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:59.329 13:48:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.329 13:48:49 -- common/autotest_common.sh@10 -- # set +x 00:06:59.329 ************************************ 00:06:59.329 START TEST nvmf_filesystem 00:06:59.329 ************************************ 00:06:59.329 13:48:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:59.329 * Looking for test storage... 00:06:59.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.329 13:48:49 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:59.329 13:48:49 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:59.329 13:48:49 -- common/autotest_common.sh@34 -- # set -e 00:06:59.329 13:48:49 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:59.329 13:48:49 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:59.329 13:48:49 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:59.329 13:48:49 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:59.329 13:48:49 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:59.329 13:48:49 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:59.329 13:48:49 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:59.329 13:48:50 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:59.329 13:48:50 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:59.329 13:48:50 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:59.329 13:48:50 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:59.329 13:48:50 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:59.329 13:48:50 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:59.329 13:48:50 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:59.329 13:48:50 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:59.329 13:48:50 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:59.329 13:48:50 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:59.329 13:48:50 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:59.329 13:48:50 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:59.329 13:48:50 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:59.329 13:48:50 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:59.329 13:48:50 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:59.329 13:48:50 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:59.329 13:48:50 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:59.329 13:48:50 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:59.329 13:48:50 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:59.329 13:48:50 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:59.329 13:48:50 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:59.329 13:48:50 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:59.329 13:48:50 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:59.329 13:48:50 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:59.329 13:48:50 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:59.329 13:48:50 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:59.329 13:48:50 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:59.329 13:48:50 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:59.329 13:48:50 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:59.329 13:48:50 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:59.329 13:48:50 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:59.329 13:48:50 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:59.329 13:48:50 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:59.329 13:48:50 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:59.329 13:48:50 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:59.329 13:48:50 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:59.329 13:48:50 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:59.329 13:48:50 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:59.329 13:48:50 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:59.329 13:48:50 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:59.329 13:48:50 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:59.329 13:48:50 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:59.329 13:48:50 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:59.329 13:48:50 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:59.329 13:48:50 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:59.330 13:48:50 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:59.330 13:48:50 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:59.330 13:48:50 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:59.330 13:48:50 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:59.330 13:48:50 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:59.330 13:48:50 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:59.330 13:48:50 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:59.330 13:48:50 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:59.330 13:48:50 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:59.330 13:48:50 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:59.330 13:48:50 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:59.330 13:48:50 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:59.330 13:48:50 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:59.330 13:48:50 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:59.330 13:48:50 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:59.330 13:48:50 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:59.330 13:48:50 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:59.330 13:48:50 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
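
The CONFIG_* stream here (continuing through CONFIG_URING below) is build_config.sh replaying the ./configure profile this build was made with; each flag has a #define twin in the generated include/spdk/config.h, which applications.sh checks a few entries further down by globbing the header's entire contents for '#define SPDK_CONFIG_DEBUG' — that is what the heavily escaped *\#\d\e\f\i\n\e...* pattern in the trace is. The idiom reduced to its core (header path hypothetical):

# $(<file) slurps the file; [[ ... == *substring* ]] is a glob match,
# so the check needs no grep subprocess.
config_h=include/spdk/config.h
if [[ $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    echo "debug build detected"
fi

The dump below shows '#define SPDK_CONFIG_DEBUG 1', so the substring match succeeds and the SPDK_AUTOTEST_DEBUG_APPS branch becomes reachable.
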
00:06:59.330 13:48:50 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:59.330 13:48:50 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:59.330 13:48:50 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:59.330 13:48:50 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:59.330 13:48:50 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:59.330 13:48:50 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:59.330 13:48:50 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:59.330 13:48:50 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:59.330 13:48:50 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:59.330 13:48:50 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:59.330 13:48:50 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:59.330 13:48:50 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:59.330 13:48:50 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:59.330 13:48:50 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:59.330 13:48:50 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:59.330 13:48:50 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:59.330 13:48:50 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:59.330 13:48:50 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:59.330 13:48:50 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:59.330 13:48:50 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:59.330 13:48:50 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:59.330 13:48:50 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:59.330 13:48:50 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:59.330 13:48:50 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:59.330 13:48:50 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:59.330 13:48:50 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:59.330 13:48:50 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:59.330 13:48:50 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:59.330 13:48:50 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:59.330 #define SPDK_CONFIG_H 00:06:59.330 #define SPDK_CONFIG_APPS 1 00:06:59.330 #define SPDK_CONFIG_ARCH native 00:06:59.330 #undef SPDK_CONFIG_ASAN 00:06:59.330 #undef SPDK_CONFIG_AVAHI 00:06:59.330 #undef SPDK_CONFIG_CET 00:06:59.330 #define SPDK_CONFIG_COVERAGE 1 00:06:59.330 #define SPDK_CONFIG_CROSS_PREFIX 00:06:59.330 #undef SPDK_CONFIG_CRYPTO 00:06:59.330 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:59.330 #undef SPDK_CONFIG_CUSTOMOCF 00:06:59.330 #undef SPDK_CONFIG_DAOS 00:06:59.330 #define SPDK_CONFIG_DAOS_DIR 00:06:59.330 #define SPDK_CONFIG_DEBUG 1 00:06:59.330 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:59.330 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:59.330 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:59.330 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:06:59.330 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:59.330 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:59.330 #define SPDK_CONFIG_EXAMPLES 1 00:06:59.330 #undef SPDK_CONFIG_FC 00:06:59.330 #define SPDK_CONFIG_FC_PATH 00:06:59.330 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:59.330 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:59.330 #undef SPDK_CONFIG_FUSE 00:06:59.330 #undef SPDK_CONFIG_FUZZER 00:06:59.330 #define SPDK_CONFIG_FUZZER_LIB 00:06:59.330 #undef SPDK_CONFIG_GOLANG 00:06:59.330 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:59.330 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:59.330 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:59.330 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:59.330 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:59.330 #define SPDK_CONFIG_IDXD 1 00:06:59.330 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:59.330 #undef SPDK_CONFIG_IPSEC_MB 00:06:59.330 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:59.330 #define SPDK_CONFIG_ISAL 1 00:06:59.330 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:59.330 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:59.330 #define SPDK_CONFIG_LIBDIR 00:06:59.330 #undef SPDK_CONFIG_LTO 00:06:59.330 #define SPDK_CONFIG_MAX_LCORES 00:06:59.330 #define SPDK_CONFIG_NVME_CUSE 1 00:06:59.330 #undef SPDK_CONFIG_OCF 00:06:59.330 #define SPDK_CONFIG_OCF_PATH 00:06:59.330 #define SPDK_CONFIG_OPENSSL_PATH 00:06:59.330 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:59.330 #undef SPDK_CONFIG_PGO_USE 00:06:59.330 #define SPDK_CONFIG_PREFIX /usr/local 00:06:59.330 #undef SPDK_CONFIG_RAID5F 00:06:59.330 #undef SPDK_CONFIG_RBD 00:06:59.330 #define SPDK_CONFIG_RDMA 1 00:06:59.330 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:59.330 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:59.330 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:59.330 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:59.330 #define SPDK_CONFIG_SHARED 1 00:06:59.330 #undef SPDK_CONFIG_SMA 00:06:59.330 #define SPDK_CONFIG_TESTS 1 00:06:59.330 #undef SPDK_CONFIG_TSAN 00:06:59.330 #define SPDK_CONFIG_UBLK 1 00:06:59.330 #define SPDK_CONFIG_UBSAN 1 00:06:59.330 #undef SPDK_CONFIG_UNIT_TESTS 00:06:59.330 #undef SPDK_CONFIG_URING 00:06:59.330 #define SPDK_CONFIG_URING_PATH 00:06:59.330 #undef SPDK_CONFIG_URING_ZNS 00:06:59.330 #undef SPDK_CONFIG_USDT 00:06:59.330 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:59.330 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:59.330 #undef SPDK_CONFIG_VFIO_USER 00:06:59.330 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:59.330 #define SPDK_CONFIG_VHOST 1 00:06:59.330 #define SPDK_CONFIG_VIRTIO 1 00:06:59.330 #undef SPDK_CONFIG_VTUNE 00:06:59.330 #define SPDK_CONFIG_VTUNE_DIR 00:06:59.330 #define SPDK_CONFIG_WERROR 1 00:06:59.330 #define SPDK_CONFIG_WPDK_DIR 00:06:59.330 #undef SPDK_CONFIG_XNVME 00:06:59.330 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:59.330 13:48:50 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:59.330 13:48:50 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.330 13:48:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.330 13:48:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.330 13:48:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.330 13:48:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.330 13:48:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.330 13:48:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.330 13:48:50 -- paths/export.sh@5 -- # export PATH 00:06:59.330 13:48:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.331 13:48:50 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:59.331 13:48:50 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:59.331 13:48:50 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:59.331 13:48:50 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:59.331 13:48:50 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:59.331 13:48:50 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:59.331 13:48:50 -- pm/common@16 -- # TEST_TAG=N/A 00:06:59.331 13:48:50 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:59.331 13:48:50 -- common/autotest_common.sh@52 -- # : 1 00:06:59.331 13:48:50 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:59.331 13:48:50 -- common/autotest_common.sh@56 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:59.331 13:48:50 -- 
common/autotest_common.sh@58 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:59.331 13:48:50 -- common/autotest_common.sh@60 -- # : 1 00:06:59.331 13:48:50 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:59.331 13:48:50 -- common/autotest_common.sh@62 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:59.331 13:48:50 -- common/autotest_common.sh@64 -- # : 00:06:59.331 13:48:50 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:59.331 13:48:50 -- common/autotest_common.sh@66 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:59.331 13:48:50 -- common/autotest_common.sh@68 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:59.331 13:48:50 -- common/autotest_common.sh@70 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:59.331 13:48:50 -- common/autotest_common.sh@72 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:59.331 13:48:50 -- common/autotest_common.sh@74 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:59.331 13:48:50 -- common/autotest_common.sh@76 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:59.331 13:48:50 -- common/autotest_common.sh@78 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:59.331 13:48:50 -- common/autotest_common.sh@80 -- # : 1 00:06:59.331 13:48:50 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:59.331 13:48:50 -- common/autotest_common.sh@82 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:59.331 13:48:50 -- common/autotest_common.sh@84 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:59.331 13:48:50 -- common/autotest_common.sh@86 -- # : 1 00:06:59.331 13:48:50 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:59.331 13:48:50 -- common/autotest_common.sh@88 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:59.331 13:48:50 -- common/autotest_common.sh@90 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:59.331 13:48:50 -- common/autotest_common.sh@92 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:59.331 13:48:50 -- common/autotest_common.sh@94 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:59.331 13:48:50 -- common/autotest_common.sh@96 -- # : tcp 00:06:59.331 13:48:50 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:59.331 13:48:50 -- common/autotest_common.sh@98 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:59.331 13:48:50 -- common/autotest_common.sh@100 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:59.331 13:48:50 -- common/autotest_common.sh@102 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:59.331 13:48:50 -- common/autotest_common.sh@104 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:59.331 
13:48:50 -- common/autotest_common.sh@106 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:59.331 13:48:50 -- common/autotest_common.sh@108 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:59.331 13:48:50 -- common/autotest_common.sh@110 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:59.331 13:48:50 -- common/autotest_common.sh@112 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:59.331 13:48:50 -- common/autotest_common.sh@114 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:59.331 13:48:50 -- common/autotest_common.sh@116 -- # : 1 00:06:59.331 13:48:50 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:59.331 13:48:50 -- common/autotest_common.sh@118 -- # : 00:06:59.331 13:48:50 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:59.331 13:48:50 -- common/autotest_common.sh@120 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:59.331 13:48:50 -- common/autotest_common.sh@122 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:59.331 13:48:50 -- common/autotest_common.sh@124 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:59.331 13:48:50 -- common/autotest_common.sh@126 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:59.331 13:48:50 -- common/autotest_common.sh@128 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:59.331 13:48:50 -- common/autotest_common.sh@130 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:59.331 13:48:50 -- common/autotest_common.sh@132 -- # : 00:06:59.331 13:48:50 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:59.331 13:48:50 -- common/autotest_common.sh@134 -- # : true 00:06:59.331 13:48:50 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:59.331 13:48:50 -- common/autotest_common.sh@136 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:59.331 13:48:50 -- common/autotest_common.sh@138 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:59.331 13:48:50 -- common/autotest_common.sh@140 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:59.331 13:48:50 -- common/autotest_common.sh@142 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:59.331 13:48:50 -- common/autotest_common.sh@144 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:59.331 13:48:50 -- common/autotest_common.sh@146 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:59.331 13:48:50 -- common/autotest_common.sh@148 -- # : e810 00:06:59.331 13:48:50 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:59.331 13:48:50 -- common/autotest_common.sh@150 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:59.331 13:48:50 -- common/autotest_common.sh@152 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
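
This long run of paired ': <value>' / 'export SPDK_TEST_*' entries (it continues below through SPDK_JSONRPC_GO_CLIENT) is how autotest_common.sh gives every test knob a default and publishes it to child scripts; under set -x the parameter expansion has already happened, so only the resulting no-op command ': 0' or ': 1' is echoed. The underlying idiom, sketched — the rdma default shown is an assumption for illustration:

# ':' is the do-nothing builtin; ${VAR:=default} assigns only when VAR
# is unset or empty, so values exported by the job config survive.
: "${SPDK_TEST_NVMF:=0}"                # traces as ': 1' in this run (set in autorun-spdk.conf)
export SPDK_TEST_NVMF
: "${SPDK_TEST_NVMF_TRANSPORT:=rdma}"   # traces as ': tcp' here; the default is assumed
export SPDK_TEST_NVMF_TRANSPORT
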
00:06:59.331 13:48:50 -- common/autotest_common.sh@154 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:59.331 13:48:50 -- common/autotest_common.sh@156 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:59.331 13:48:50 -- common/autotest_common.sh@158 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:59.331 13:48:50 -- common/autotest_common.sh@160 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:59.331 13:48:50 -- common/autotest_common.sh@163 -- # : 00:06:59.331 13:48:50 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:59.331 13:48:50 -- common/autotest_common.sh@165 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:59.331 13:48:50 -- common/autotest_common.sh@167 -- # : 0 00:06:59.331 13:48:50 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:59.331 13:48:50 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:59.331 13:48:50 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:59.331 13:48:50 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:59.331 13:48:50 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:59.331 13:48:50 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:59.331 13:48:50 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:59.332 13:48:50 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:59.332 13:48:50 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:59.332 13:48:50 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:59.332 13:48:50 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:59.332 13:48:50 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:59.332 13:48:50 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:59.332 13:48:50 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:59.332 13:48:50 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:59.332 13:48:50 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:59.332 13:48:50 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:59.332 13:48:50 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:59.332 13:48:50 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:59.332 13:48:50 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:59.332 13:48:50 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:59.332 13:48:50 -- common/autotest_common.sh@196 -- # cat 00:06:59.332 13:48:50 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:59.332 13:48:50 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:59.332 13:48:50 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:59.332 13:48:50 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:59.332 13:48:50 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:59.332 13:48:50 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:59.332 13:48:50 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:59.332 13:48:50 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:59.332 13:48:50 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:59.332 13:48:50 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:59.332 13:48:50 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:59.332 13:48:50 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:59.332 13:48:50 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:59.332 13:48:50 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:59.332 13:48:50 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:59.332 13:48:50 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:59.332 13:48:50 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:59.332 13:48:50 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:59.332 13:48:50 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:59.332 13:48:50 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:59.332 13:48:50 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:59.332 13:48:50 -- common/autotest_common.sh@249 -- # valgrind= 00:06:59.332 13:48:50 -- common/autotest_common.sh@255 -- # uname -s 00:06:59.332 13:48:50 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:59.332 13:48:50 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:59.332 13:48:50 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:06:59.332 13:48:50 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:59.332 13:48:50 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:59.332 13:48:50 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:59.332 13:48:50 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:59.332 13:48:50 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j96 00:06:59.332 13:48:50 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:59.332 13:48:50 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:59.332 13:48:50 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:59.332 13:48:50 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:59.332 13:48:50 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:59.332 13:48:50 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:59.332 13:48:50 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:59.332 13:48:50 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:59.332 13:48:50 -- common/autotest_common.sh@309 -- # [[ -z 3106367 ]] 00:06:59.332 13:48:50 -- common/autotest_common.sh@309 -- # 
kill -0 3106367 00:06:59.332 13:48:50 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:59.332 13:48:50 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:59.332 13:48:50 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:59.332 13:48:50 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:59.332 13:48:50 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:59.332 13:48:50 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:59.332 13:48:50 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:59.332 13:48:50 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:59.332 13:48:50 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.J24x5B 00:06:59.332 13:48:50 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:59.332 13:48:50 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:59.332 13:48:50 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:59.332 13:48:50 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.J24x5B/tests/target /tmp/spdk.J24x5B 00:06:59.332 13:48:50 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:59.332 13:48:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:59.332 13:48:50 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:59.332 13:48:50 -- common/autotest_common.sh@318 -- # df -T 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:06:59.332 13:48:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:59.332 13:48:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=950202368 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:06:59.332 13:48:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=4334227456 00:06:59.332 13:48:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=185301790720 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=195974283264 00:06:59.332 13:48:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=10672492544 00:06:59.332 13:48:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=97933623296 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=97987141632 00:06:59.332 13:48:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:06:59.332 13:48:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=39185477632 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=39194857472 00:06:59.332 13:48:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=9379840 00:06:59.332 13:48:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=97984995328 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=97987141632 00:06:59.332 13:48:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=2146304 00:06:59.332 13:48:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # avails["$mount"]=19597422592 00:06:59.332 13:48:50 -- common/autotest_common.sh@353 -- # sizes["$mount"]=19597426688 00:06:59.332 13:48:50 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:59.333 13:48:50 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:59.333 13:48:50 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:59.333 * Looking for test storage... 
00:06:59.333 13:48:50 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:59.333 13:48:50 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:59.333 13:48:50 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.333 13:48:50 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:59.333 13:48:50 -- common/autotest_common.sh@363 -- # mount=/ 00:06:59.333 13:48:50 -- common/autotest_common.sh@365 -- # target_space=185301790720 00:06:59.333 13:48:50 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:59.333 13:48:50 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:59.333 13:48:50 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:06:59.333 13:48:50 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:06:59.333 13:48:50 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:06:59.333 13:48:50 -- common/autotest_common.sh@372 -- # new_size=12887085056 00:06:59.333 13:48:50 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:59.333 13:48:50 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.333 13:48:50 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.333 13:48:50 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.333 13:48:50 -- common/autotest_common.sh@380 -- # return 0 00:06:59.333 13:48:50 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:59.333 13:48:50 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:59.333 13:48:50 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:59.333 13:48:50 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:59.333 13:48:50 -- common/autotest_common.sh@1672 -- # true 00:06:59.333 13:48:50 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:59.333 13:48:50 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:59.333 13:48:50 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:59.333 13:48:50 -- common/autotest_common.sh@27 -- # exec 00:06:59.333 13:48:50 -- common/autotest_common.sh@29 -- # exec 00:06:59.333 13:48:50 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:59.333 13:48:50 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:59.333 13:48:50 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:59.333 13:48:50 -- common/autotest_common.sh@18 -- # set -x 00:06:59.333 13:48:50 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.333 13:48:50 -- nvmf/common.sh@7 -- # uname -s 00:06:59.333 13:48:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:59.333 13:48:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:59.333 13:48:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:59.333 13:48:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:59.333 13:48:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:59.333 13:48:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:59.333 13:48:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:59.333 13:48:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:59.333 13:48:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:59.333 13:48:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:59.333 13:48:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:59.333 13:48:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:59.333 13:48:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:59.333 13:48:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:59.333 13:48:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:59.333 13:48:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:59.333 13:48:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:59.333 13:48:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:59.333 13:48:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:59.333 13:48:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.333 13:48:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.333 13:48:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.333 13:48:50 -- paths/export.sh@5 -- # export PATH 00:06:59.333 13:48:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:59.333 13:48:50 -- nvmf/common.sh@46 -- # : 0 00:06:59.333 13:48:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:59.333 13:48:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:59.333 13:48:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:59.333 13:48:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:59.333 13:48:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:59.333 13:48:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:59.333 13:48:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:59.333 13:48:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:59.333 13:48:50 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:59.333 13:48:50 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:59.333 13:48:50 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:59.333 13:48:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:59.333 13:48:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.333 13:48:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:59.333 13:48:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:59.333 13:48:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:59.333 13:48:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.333 13:48:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.333 13:48:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.333 13:48:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:59.333 13:48:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:59.333 13:48:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:59.333 13:48:50 -- common/autotest_common.sh@10 -- # set +x 00:07:04.611 13:48:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:04.611 13:48:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:04.611 13:48:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:04.611 13:48:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:04.611 13:48:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:04.611 13:48:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:04.611 13:48:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:04.611 13:48:55 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:04.611 13:48:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:04.611 13:48:55 -- nvmf/common.sh@295 -- # e810=() 00:07:04.611 13:48:55 -- nvmf/common.sh@295 -- # local -ga e810 00:07:04.611 13:48:55 -- nvmf/common.sh@296 -- # x722=() 00:07:04.611 13:48:55 -- nvmf/common.sh@296 -- # local -ga x722 00:07:04.611 13:48:55 -- nvmf/common.sh@297 -- # mlx=() 00:07:04.611 13:48:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:04.611 13:48:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.611 13:48:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:04.611 13:48:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:04.611 13:48:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:04.611 13:48:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:04.611 13:48:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:04.611 13:48:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:04.611 13:48:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:04.611 13:48:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:04.611 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:04.611 13:48:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:04.611 13:48:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:04.611 13:48:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.611 13:48:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.611 13:48:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:04.611 13:48:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:04.612 13:48:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:04.612 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:04.612 13:48:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:04.612 13:48:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:04.612 13:48:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.612 13:48:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:04.612 13:48:55 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.612 13:48:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:04.612 Found net devices under 0000:86:00.0: cvl_0_0 00:07:04.612 13:48:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.612 13:48:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:04.612 13:48:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.612 13:48:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:04.612 13:48:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.612 13:48:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:04.612 Found net devices under 0000:86:00.1: cvl_0_1 00:07:04.612 13:48:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.612 13:48:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:04.612 13:48:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:04.612 13:48:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:04.612 13:48:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:04.612 13:48:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.612 13:48:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.612 13:48:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.612 13:48:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:04.612 13:48:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.612 13:48:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.612 13:48:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:04.612 13:48:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.612 13:48:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.612 13:48:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:04.612 13:48:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:04.612 13:48:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.612 13:48:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:04.612 13:48:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.612 13:48:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.612 13:48:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:04.612 13:48:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.873 13:48:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.873 13:48:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.873 13:48:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:04.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:07:04.873 00:07:04.873 --- 10.0.0.2 ping statistics --- 00:07:04.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.873 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:07:04.873 13:48:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:04.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.247 ms 00:07:04.873 00:07:04.873 --- 10.0.0.1 ping statistics --- 00:07:04.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.873 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:07:04.873 13:48:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.873 13:48:55 -- nvmf/common.sh@410 -- # return 0 00:07:04.873 13:48:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:04.873 13:48:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.873 13:48:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:04.873 13:48:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:04.873 13:48:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.873 13:48:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:04.873 13:48:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:04.873 13:48:55 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:04.873 13:48:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:04.873 13:48:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.873 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:07:04.873 ************************************ 00:07:04.873 START TEST nvmf_filesystem_no_in_capsule 00:07:04.873 ************************************ 00:07:04.873 13:48:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:07:04.873 13:48:55 -- target/filesystem.sh@47 -- # in_capsule=0 00:07:04.873 13:48:55 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:04.873 13:48:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:04.873 13:48:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:04.873 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:07:04.873 13:48:55 -- nvmf/common.sh@469 -- # nvmfpid=3109403 00:07:04.873 13:48:55 -- nvmf/common.sh@470 -- # waitforlisten 3109403 00:07:04.873 13:48:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.873 13:48:55 -- common/autotest_common.sh@819 -- # '[' -z 3109403 ']' 00:07:04.873 13:48:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.873 13:48:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:04.873 13:48:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.873 13:48:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:04.873 13:48:55 -- common/autotest_common.sh@10 -- # set +x 00:07:04.873 [2024-07-23 13:48:55.772229] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
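The namespace plumbing traced above is what lets a single host drive a real NIC-to-NIC NVMe/TCP path: port cvl_0_0 moves into the private namespace cvl_0_0_ns_spdk and takes the target address 10.0.0.2, while its peer cvl_0_1 stays in the root namespace as the 10.0.0.1 initiator, and a ping in each direction proves the link before the target starts. A minimal sketch of the same setup, using the interface names and addresses from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # root namespace to target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace back out
    # the target itself then runs inside the namespace, pinned to four cores:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &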
00:07:04.873 [2024-07-23 13:48:55.772273] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.873 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.873 [2024-07-23 13:48:55.831932] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:05.133 [2024-07-23 13:48:55.912267] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:05.133 [2024-07-23 13:48:55.912376] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:05.133 [2024-07-23 13:48:55.912385] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:05.133 [2024-07-23 13:48:55.912391] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:05.133 [2024-07-23 13:48:55.912432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.133 [2024-07-23 13:48:55.912530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.133 [2024-07-23 13:48:55.912617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.133 [2024-07-23 13:48:55.912618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.704 13:48:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:05.704 13:48:56 -- common/autotest_common.sh@852 -- # return 0 00:07:05.704 13:48:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:05.704 13:48:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:05.704 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:07:05.704 13:48:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.704 13:48:56 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:05.704 13:48:56 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:05.704 13:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.704 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:07:05.704 [2024-07-23 13:48:56.610308] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.704 13:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.704 13:48:56 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:05.704 13:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.704 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:07:05.965 Malloc1 00:07:05.965 13:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.965 13:48:56 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:05.965 13:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.965 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:07:05.965 13:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.965 13:48:56 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:05.965 13:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.965 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:07:05.965 13:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.965 13:48:56 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
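rpc_cmd in this trace is the harness wrapper around SPDK's scripts/rpc.py, which reaches the target through the /var/tmp/spdk.sock UNIX socket (still reachable from the root namespace, since network namespaces do not isolate the filesystem). Run by hand from an SPDK checkout, the provisioning sequence above would look roughly like this:

    # TCP transport with no in-capsule data for this first pass (-c 0),
    # then a 512 MiB RAM-backed bdev: 1048576 blocks of 512 bytes
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
    # export the bdev through a subsystem that any host may connect to (-a)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator then attaches with nvme-cli exactly as logged below, passing the generated --hostnqn/--hostid pair to nvme connect against 10.0.0.2:4420.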
00:07:05.965 13:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.965 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:07:05.965 [2024-07-23 13:48:56.761045] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.965 13:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.965 13:48:56 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:05.965 13:48:56 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:05.965 13:48:56 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:05.965 13:48:56 -- common/autotest_common.sh@1359 -- # local bs 00:07:05.965 13:48:56 -- common/autotest_common.sh@1360 -- # local nb 00:07:05.965 13:48:56 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:05.965 13:48:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.965 13:48:56 -- common/autotest_common.sh@10 -- # set +x 00:07:05.965 13:48:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.965 13:48:56 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:05.965 { 00:07:05.965 "name": "Malloc1", 00:07:05.965 "aliases": [ 00:07:05.965 "ebc8e944-ef5a-4fd5-8c31-70743fbf2f16" 00:07:05.965 ], 00:07:05.965 "product_name": "Malloc disk", 00:07:05.965 "block_size": 512, 00:07:05.965 "num_blocks": 1048576, 00:07:05.965 "uuid": "ebc8e944-ef5a-4fd5-8c31-70743fbf2f16", 00:07:05.965 "assigned_rate_limits": { 00:07:05.965 "rw_ios_per_sec": 0, 00:07:05.965 "rw_mbytes_per_sec": 0, 00:07:05.965 "r_mbytes_per_sec": 0, 00:07:05.965 "w_mbytes_per_sec": 0 00:07:05.965 }, 00:07:05.965 "claimed": true, 00:07:05.965 "claim_type": "exclusive_write", 00:07:05.965 "zoned": false, 00:07:05.965 "supported_io_types": { 00:07:05.965 "read": true, 00:07:05.965 "write": true, 00:07:05.965 "unmap": true, 00:07:05.965 "write_zeroes": true, 00:07:05.965 "flush": true, 00:07:05.965 "reset": true, 00:07:05.965 "compare": false, 00:07:05.965 "compare_and_write": false, 00:07:05.965 "abort": true, 00:07:05.965 "nvme_admin": false, 00:07:05.965 "nvme_io": false 00:07:05.965 }, 00:07:05.965 "memory_domains": [ 00:07:05.965 { 00:07:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:05.965 "dma_device_type": 2 00:07:05.965 } 00:07:05.965 ], 00:07:05.965 "driver_specific": {} 00:07:05.965 } 00:07:05.965 ]' 00:07:05.965 13:48:56 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:05.965 13:48:56 -- common/autotest_common.sh@1362 -- # bs=512 00:07:05.965 13:48:56 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:05.965 13:48:56 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:05.965 13:48:56 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:05.965 13:48:56 -- common/autotest_common.sh@1367 -- # echo 512 00:07:05.965 13:48:56 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:05.965 13:48:56 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:07.372 13:48:58 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:07.372 13:48:58 -- common/autotest_common.sh@1177 -- # local i=0 00:07:07.372 13:48:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:07.372 13:48:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:07.372 13:48:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:09.277 13:49:00 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:09.277 13:49:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:09.277 13:49:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:09.277 13:49:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:09.277 13:49:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:09.277 13:49:00 -- common/autotest_common.sh@1187 -- # return 0 00:07:09.277 13:49:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:09.277 13:49:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:09.277 13:49:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:09.277 13:49:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:09.277 13:49:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:09.277 13:49:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:09.277 13:49:00 -- setup/common.sh@80 -- # echo 536870912 00:07:09.277 13:49:00 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:09.277 13:49:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:09.277 13:49:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:09.277 13:49:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:09.536 13:49:00 -- target/filesystem.sh@69 -- # partprobe 00:07:10.103 13:49:00 -- target/filesystem.sh@70 -- # sleep 1 00:07:11.041 13:49:01 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:11.041 13:49:01 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:11.041 13:49:01 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:11.041 13:49:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.041 13:49:01 -- common/autotest_common.sh@10 -- # set +x 00:07:11.041 ************************************ 00:07:11.041 START TEST filesystem_ext4 00:07:11.041 ************************************ 00:07:11.041 13:49:01 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:11.041 13:49:01 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:11.041 13:49:01 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:11.041 13:49:01 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:11.041 13:49:01 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:11.041 13:49:01 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:11.041 13:49:01 -- common/autotest_common.sh@904 -- # local i=0 00:07:11.041 13:49:01 -- common/autotest_common.sh@905 -- # local force 00:07:11.041 13:49:01 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:11.041 13:49:01 -- common/autotest_common.sh@908 -- # force=-F 00:07:11.041 13:49:01 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:11.041 mke2fs 1.46.5 (30-Dec-2021) 00:07:11.041 Discarding device blocks: 0/522240 done 00:07:11.041 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:11.041 Filesystem UUID: 1d8ee8d6-4619-4dcd-bddb-5e22c1711615 00:07:11.041 Superblock backups stored on blocks: 00:07:11.041 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:11.041 00:07:11.041 Allocating group tables: 0/64 done 00:07:11.041 Writing inode tables: 0/64 done 00:07:11.300 Creating journal (8192 blocks): done 00:07:12.386 Writing superblocks and filesystem accounting information: 0/64 done 00:07:12.386 00:07:12.386 13:49:03 -- 
common/autotest_common.sh@921 -- # return 0 00:07:12.386 13:49:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:12.386 13:49:03 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:12.644 13:49:03 -- target/filesystem.sh@25 -- # sync 00:07:12.645 13:49:03 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:12.645 13:49:03 -- target/filesystem.sh@27 -- # sync 00:07:12.645 13:49:03 -- target/filesystem.sh@29 -- # i=0 00:07:12.645 13:49:03 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:12.645 13:49:03 -- target/filesystem.sh@37 -- # kill -0 3109403 00:07:12.645 13:49:03 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:12.645 13:49:03 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:12.645 13:49:03 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:12.645 13:49:03 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:12.645 00:07:12.645 real 0m1.612s 00:07:12.645 user 0m0.022s 00:07:12.645 sys 0m0.045s 00:07:12.645 13:49:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.645 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:07:12.645 ************************************ 00:07:12.645 END TEST filesystem_ext4 00:07:12.645 ************************************ 00:07:12.645 13:49:03 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:12.645 13:49:03 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:12.645 13:49:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.645 13:49:03 -- common/autotest_common.sh@10 -- # set +x 00:07:12.645 ************************************ 00:07:12.645 START TEST filesystem_btrfs 00:07:12.645 ************************************ 00:07:12.645 13:49:03 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:12.645 13:49:03 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:12.645 13:49:03 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:12.645 13:49:03 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:12.645 13:49:03 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:12.645 13:49:03 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:12.645 13:49:03 -- common/autotest_common.sh@904 -- # local i=0 00:07:12.645 13:49:03 -- common/autotest_common.sh@905 -- # local force 00:07:12.645 13:49:03 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:12.645 13:49:03 -- common/autotest_common.sh@910 -- # force=-f 00:07:12.645 13:49:03 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:12.903 btrfs-progs v6.6.2 00:07:12.903 See https://btrfs.readthedocs.io for more information. 00:07:12.903 00:07:12.903 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:12.903 NOTE: several default settings have changed in version 5.15, please make sure 00:07:12.903 this does not affect your deployments: 00:07:12.903 - DUP for metadata (-m dup) 00:07:12.903 - enabled no-holes (-O no-holes) 00:07:12.903 - enabled free-space-tree (-R free-space-tree) 00:07:12.903 00:07:12.903 Label: (null) 00:07:12.903 UUID: 6a1da59e-5706-4eb1-8fa6-b0eb230bca91 00:07:12.903 Node size: 16384 00:07:12.903 Sector size: 4096 00:07:12.903 Filesystem size: 510.00MiB 00:07:12.903 Block group profiles: 00:07:12.903 Data: single 8.00MiB 00:07:12.903 Metadata: DUP 32.00MiB 00:07:12.903 System: DUP 8.00MiB 00:07:12.903 SSD detected: yes 00:07:12.903 Zoned device: no 00:07:12.903 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:12.903 Runtime features: free-space-tree 00:07:12.903 Checksum: crc32c 00:07:12.903 Number of devices: 1 00:07:12.903 Devices: 00:07:12.903 ID SIZE PATH 00:07:12.903 1 510.00MiB /dev/nvme0n1p1 00:07:12.903 00:07:12.903 13:49:03 -- common/autotest_common.sh@921 -- # return 0 00:07:12.903 13:49:03 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:13.471 13:49:04 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:13.471 13:49:04 -- target/filesystem.sh@25 -- # sync 00:07:13.471 13:49:04 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:13.471 13:49:04 -- target/filesystem.sh@27 -- # sync 00:07:13.471 13:49:04 -- target/filesystem.sh@29 -- # i=0 00:07:13.471 13:49:04 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:13.471 13:49:04 -- target/filesystem.sh@37 -- # kill -0 3109403 00:07:13.471 13:49:04 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:13.471 13:49:04 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:13.471 13:49:04 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:13.471 13:49:04 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:13.471 00:07:13.471 real 0m0.734s 00:07:13.471 user 0m0.021s 00:07:13.471 sys 0m0.059s 00:07:13.471 13:49:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.471 13:49:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.471 ************************************ 00:07:13.471 END TEST filesystem_btrfs 00:07:13.471 ************************************ 00:07:13.471 13:49:04 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:13.471 13:49:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:13.471 13:49:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:13.471 13:49:04 -- common/autotest_common.sh@10 -- # set +x 00:07:13.471 ************************************ 00:07:13.471 START TEST filesystem_xfs 00:07:13.471 ************************************ 00:07:13.471 13:49:04 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:13.471 13:49:04 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:13.471 13:49:04 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:13.471 13:49:04 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:13.471 13:49:04 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:13.471 13:49:04 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:13.471 13:49:04 -- common/autotest_common.sh@904 -- # local i=0 00:07:13.471 13:49:04 -- common/autotest_common.sh@905 -- # local force 00:07:13.471 13:49:04 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:13.471 13:49:04 -- common/autotest_common.sh@910 -- # force=-f 00:07:13.471 13:49:04 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:13.471 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:13.471 = sectsz=512 attr=2, projid32bit=1 00:07:13.471 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:13.471 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:13.471 data = bsize=4096 blocks=130560, imaxpct=25 00:07:13.471 = sunit=0 swidth=0 blks 00:07:13.471 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:13.471 log =internal log bsize=4096 blocks=16384, version=2 00:07:13.471 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:13.471 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:14.407 Discarding blocks...Done. 00:07:14.407 13:49:05 -- common/autotest_common.sh@921 -- # return 0 00:07:14.407 13:49:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:16.941 13:49:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:16.941 13:49:07 -- target/filesystem.sh@25 -- # sync 00:07:16.941 13:49:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:16.941 13:49:07 -- target/filesystem.sh@27 -- # sync 00:07:16.941 13:49:07 -- target/filesystem.sh@29 -- # i=0 00:07:16.941 13:49:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:16.941 13:49:07 -- target/filesystem.sh@37 -- # kill -0 3109403 00:07:16.941 13:49:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:16.941 13:49:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:16.941 13:49:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:16.941 13:49:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:16.941 00:07:16.941 real 0m3.478s 00:07:16.941 user 0m0.023s 00:07:16.941 sys 0m0.049s 00:07:16.941 13:49:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:16.941 13:49:07 -- common/autotest_common.sh@10 -- # set +x 00:07:16.941 ************************************ 00:07:16.941 END TEST filesystem_xfs 00:07:16.941 ************************************ 00:07:16.941 13:49:07 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:17.200 13:49:08 -- target/filesystem.sh@93 -- # sync 00:07:17.200 13:49:08 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:17.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.460 13:49:08 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:17.460 13:49:08 -- common/autotest_common.sh@1198 -- # local i=0 00:07:17.460 13:49:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:17.460 13:49:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.460 13:49:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:17.460 13:49:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:17.460 13:49:08 -- common/autotest_common.sh@1210 -- # return 0 00:07:17.460 13:49:08 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:17.460 13:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:17.460 13:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.460 13:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:17.460 13:49:08 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:17.460 13:49:08 -- target/filesystem.sh@101 -- # killprocess 3109403 00:07:17.460 13:49:08 -- common/autotest_common.sh@926 -- # '[' -z 3109403 ']' 00:07:17.460 13:49:08 -- common/autotest_common.sh@930 -- # kill -0 3109403 00:07:17.460 13:49:08 -- 
common/autotest_common.sh@931 -- # uname 00:07:17.460 13:49:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:17.460 13:49:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3109403 00:07:17.460 13:49:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:17.460 13:49:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:17.460 13:49:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3109403' 00:07:17.460 killing process with pid 3109403 00:07:17.460 13:49:08 -- common/autotest_common.sh@945 -- # kill 3109403 00:07:17.460 13:49:08 -- common/autotest_common.sh@950 -- # wait 3109403 00:07:17.719 13:49:08 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:17.719 00:07:17.719 real 0m12.984s 00:07:17.719 user 0m50.876s 00:07:17.719 sys 0m1.070s 00:07:17.719 13:49:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.719 13:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.719 ************************************ 00:07:17.719 END TEST nvmf_filesystem_no_in_capsule 00:07:17.719 ************************************ 00:07:17.979 13:49:08 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:17.979 13:49:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:17.979 13:49:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.979 13:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.979 ************************************ 00:07:17.979 START TEST nvmf_filesystem_in_capsule 00:07:17.979 ************************************ 00:07:17.979 13:49:08 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:07:17.979 13:49:08 -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:17.979 13:49:08 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:17.979 13:49:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:17.979 13:49:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:17.979 13:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.979 13:49:08 -- nvmf/common.sh@469 -- # nvmfpid=3111737 00:07:17.979 13:49:08 -- nvmf/common.sh@470 -- # waitforlisten 3111737 00:07:17.979 13:49:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:17.979 13:49:08 -- common/autotest_common.sh@819 -- # '[' -z 3111737 ']' 00:07:17.979 13:49:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.979 13:49:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:17.979 13:49:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.979 13:49:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:17.979 13:49:08 -- common/autotest_common.sh@10 -- # set +x 00:07:17.979 [2024-07-23 13:49:08.781925] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
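The teardown traced above is the harness killprocess helper: a kill -0 liveness probe, a ps check that the PID still names the expected process (reactor_0, an SPDK reactor, rather than a recycled PID), then kill and wait. Reduced to its core, the pattern is roughly:

    killprocess() {
        local pid=$1
        kill -0 "$pid"                             # errors out if the process already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")    # reactor_0 for an SPDK app; the real
                                                   # helper special-cases sudo-owned pids
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap it; valid because nvmf_tgt is a child job
    }

With the first target gone, run_test relaunches the whole battery as nvmf_filesystem_in_capsule, passing 4096 instead of 0 to nvmf_filesystem_part.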
00:07:17.979 [2024-07-23 13:49:08.781971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.979 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.979 [2024-07-23 13:49:08.830694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.979 [2024-07-23 13:49:08.901933] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.979 [2024-07-23 13:49:08.902039] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.979 [2024-07-23 13:49:08.902054] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.979 [2024-07-23 13:49:08.902060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:17.979 [2024-07-23 13:49:08.902104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.979 [2024-07-23 13:49:08.902201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.979 [2024-07-23 13:49:08.902263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.979 [2024-07-23 13:49:08.902264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.917 13:49:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:18.917 13:49:09 -- common/autotest_common.sh@852 -- # return 0 00:07:18.917 13:49:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:18.917 13:49:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:18.917 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.917 13:49:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:18.917 13:49:09 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:18.917 13:49:09 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:18.917 13:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.917 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.917 [2024-07-23 13:49:09.635438] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:18.917 13:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.917 13:49:09 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:18.917 13:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.917 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.917 Malloc1 00:07:18.917 13:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.917 13:49:09 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:18.917 13:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.917 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.917 13:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.917 13:49:09 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:18.917 13:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.917 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.917 13:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.917 13:49:09 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
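The only functional difference from the first pass is the transport's in-capsule data size. With -c 4096, writes of up to 4 KiB travel inline in the NVMe/TCP command capsule instead of being pulled by the target in a separate data transfer, which is exactly the code path this variant exists to cover:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0     # first pass: no in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096  # this pass: small writes ride in the capsule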
00:07:18.917 13:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.917 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.917 [2024-07-23 13:49:09.777116] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:18.917 13:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.917 13:49:09 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:18.917 13:49:09 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:07:18.917 13:49:09 -- common/autotest_common.sh@1358 -- # local bdev_info 00:07:18.917 13:49:09 -- common/autotest_common.sh@1359 -- # local bs 00:07:18.917 13:49:09 -- common/autotest_common.sh@1360 -- # local nb 00:07:18.917 13:49:09 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:18.917 13:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:18.917 13:49:09 -- common/autotest_common.sh@10 -- # set +x 00:07:18.917 13:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:18.917 13:49:09 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:07:18.917 { 00:07:18.917 "name": "Malloc1", 00:07:18.917 "aliases": [ 00:07:18.917 "295eeaa7-804e-4a9c-ad7d-4d1d748ce90d" 00:07:18.917 ], 00:07:18.917 "product_name": "Malloc disk", 00:07:18.917 "block_size": 512, 00:07:18.917 "num_blocks": 1048576, 00:07:18.917 "uuid": "295eeaa7-804e-4a9c-ad7d-4d1d748ce90d", 00:07:18.917 "assigned_rate_limits": { 00:07:18.917 "rw_ios_per_sec": 0, 00:07:18.917 "rw_mbytes_per_sec": 0, 00:07:18.917 "r_mbytes_per_sec": 0, 00:07:18.917 "w_mbytes_per_sec": 0 00:07:18.917 }, 00:07:18.917 "claimed": true, 00:07:18.917 "claim_type": "exclusive_write", 00:07:18.917 "zoned": false, 00:07:18.917 "supported_io_types": { 00:07:18.917 "read": true, 00:07:18.917 "write": true, 00:07:18.917 "unmap": true, 00:07:18.917 "write_zeroes": true, 00:07:18.917 "flush": true, 00:07:18.917 "reset": true, 00:07:18.917 "compare": false, 00:07:18.918 "compare_and_write": false, 00:07:18.918 "abort": true, 00:07:18.918 "nvme_admin": false, 00:07:18.918 "nvme_io": false 00:07:18.918 }, 00:07:18.918 "memory_domains": [ 00:07:18.918 { 00:07:18.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:18.918 "dma_device_type": 2 00:07:18.918 } 00:07:18.918 ], 00:07:18.918 "driver_specific": {} 00:07:18.918 } 00:07:18.918 ]' 00:07:18.918 13:49:09 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:07:18.918 13:49:09 -- common/autotest_common.sh@1362 -- # bs=512 00:07:18.918 13:49:09 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:07:18.918 13:49:09 -- common/autotest_common.sh@1363 -- # nb=1048576 00:07:18.918 13:49:09 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:07:18.918 13:49:09 -- common/autotest_common.sh@1367 -- # echo 512 00:07:18.918 13:49:09 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:18.918 13:49:09 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:20.296 13:49:11 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:20.297 13:49:11 -- common/autotest_common.sh@1177 -- # local i=0 00:07:20.297 13:49:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:07:20.297 13:49:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:07:20.297 13:49:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:07:22.201 13:49:13 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:07:22.201 13:49:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:07:22.201 13:49:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:07:22.201 13:49:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:07:22.201 13:49:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:07:22.201 13:49:13 -- common/autotest_common.sh@1187 -- # return 0 00:07:22.201 13:49:13 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:22.202 13:49:13 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:22.202 13:49:13 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:22.202 13:49:13 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:22.202 13:49:13 -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:22.202 13:49:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:22.202 13:49:13 -- setup/common.sh@80 -- # echo 536870912 00:07:22.202 13:49:13 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:22.202 13:49:13 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:22.202 13:49:13 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:22.202 13:49:13 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:22.460 13:49:13 -- target/filesystem.sh@69 -- # partprobe 00:07:22.460 13:49:13 -- target/filesystem.sh@70 -- # sleep 1 00:07:23.839 13:49:14 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:23.839 13:49:14 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:23.839 13:49:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:23.839 13:49:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.839 13:49:14 -- common/autotest_common.sh@10 -- # set +x 00:07:23.839 ************************************ 00:07:23.839 START TEST filesystem_in_capsule_ext4 00:07:23.839 ************************************ 00:07:23.839 13:49:14 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:23.839 13:49:14 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:23.839 13:49:14 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:23.839 13:49:14 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:23.839 13:49:14 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:07:23.839 13:49:14 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:23.839 13:49:14 -- common/autotest_common.sh@904 -- # local i=0 00:07:23.839 13:49:14 -- common/autotest_common.sh@905 -- # local force 00:07:23.839 13:49:14 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:07:23.839 13:49:14 -- common/autotest_common.sh@908 -- # force=-F 00:07:23.839 13:49:14 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:23.839 mke2fs 1.46.5 (30-Dec-2021) 00:07:23.839 Discarding device blocks: 0/522240 done 00:07:23.839 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:23.839 Filesystem UUID: 464a3497-22f6-4c36-ae71-b325b8451105 00:07:23.839 Superblock backups stored on blocks: 00:07:23.839 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:23.839 00:07:23.839 Allocating group tables: 0/64 done 00:07:23.839 Writing inode tables: 0/64 done 00:07:26.374 Creating journal (8192 blocks): done 00:07:26.374 Writing superblocks and filesystem accounting information: 0/64 done 00:07:26.374 00:07:26.374 
13:49:17 -- common/autotest_common.sh@921 -- # return 0 00:07:26.374 13:49:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:26.632 13:49:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:26.632 13:49:17 -- target/filesystem.sh@25 -- # sync 00:07:26.632 13:49:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:26.632 13:49:17 -- target/filesystem.sh@27 -- # sync 00:07:26.632 13:49:17 -- target/filesystem.sh@29 -- # i=0 00:07:26.632 13:49:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:26.632 13:49:17 -- target/filesystem.sh@37 -- # kill -0 3111737 00:07:26.632 13:49:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:26.632 13:49:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:26.632 13:49:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:26.632 13:49:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:26.632 00:07:26.632 real 0m3.142s 00:07:26.632 user 0m0.026s 00:07:26.632 sys 0m0.044s 00:07:26.632 13:49:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.632 13:49:17 -- common/autotest_common.sh@10 -- # set +x 00:07:26.632 ************************************ 00:07:26.632 END TEST filesystem_in_capsule_ext4 00:07:26.632 ************************************ 00:07:26.632 13:49:17 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:26.632 13:49:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:26.632 13:49:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.632 13:49:17 -- common/autotest_common.sh@10 -- # set +x 00:07:26.632 ************************************ 00:07:26.632 START TEST filesystem_in_capsule_btrfs 00:07:26.632 ************************************ 00:07:26.632 13:49:17 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:26.632 13:49:17 -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:26.632 13:49:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:26.632 13:49:17 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:26.632 13:49:17 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:07:26.632 13:49:17 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:26.632 13:49:17 -- common/autotest_common.sh@904 -- # local i=0 00:07:26.632 13:49:17 -- common/autotest_common.sh@905 -- # local force 00:07:26.632 13:49:17 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:07:26.632 13:49:17 -- common/autotest_common.sh@910 -- # force=-f 00:07:26.632 13:49:17 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:26.891 btrfs-progs v6.6.2 00:07:26.891 See https://btrfs.readthedocs.io for more information. 00:07:26.891 00:07:26.891 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:26.891 NOTE: several default settings have changed in version 5.15, please make sure 00:07:26.891 this does not affect your deployments: 00:07:26.891 - DUP for metadata (-m dup) 00:07:26.891 - enabled no-holes (-O no-holes) 00:07:26.891 - enabled free-space-tree (-R free-space-tree) 00:07:26.891 00:07:26.891 Label: (null) 00:07:26.891 UUID: 687f2767-2ae8-457e-ab28-21a416e8152a 00:07:26.891 Node size: 16384 00:07:26.891 Sector size: 4096 00:07:26.891 Filesystem size: 510.00MiB 00:07:26.891 Block group profiles: 00:07:26.891 Data: single 8.00MiB 00:07:26.891 Metadata: DUP 32.00MiB 00:07:26.891 System: DUP 8.00MiB 00:07:26.891 SSD detected: yes 00:07:26.891 Zoned device: no 00:07:26.891 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:26.891 Runtime features: free-space-tree 00:07:26.891 Checksum: crc32c 00:07:26.891 Number of devices: 1 00:07:26.891 Devices: 00:07:26.891 ID SIZE PATH 00:07:26.891 1 510.00MiB /dev/nvme0n1p1 00:07:26.891 00:07:26.891 13:49:17 -- common/autotest_common.sh@921 -- # return 0 00:07:26.891 13:49:17 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:27.492 13:49:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:27.492 13:49:18 -- target/filesystem.sh@25 -- # sync 00:07:27.492 13:49:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:27.492 13:49:18 -- target/filesystem.sh@27 -- # sync 00:07:27.492 13:49:18 -- target/filesystem.sh@29 -- # i=0 00:07:27.492 13:49:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:27.492 13:49:18 -- target/filesystem.sh@37 -- # kill -0 3111737 00:07:27.492 13:49:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:27.492 13:49:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:27.492 13:49:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:27.492 13:49:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:27.492 00:07:27.492 real 0m0.838s 00:07:27.492 user 0m0.025s 00:07:27.492 sys 0m0.054s 00:07:27.492 13:49:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.492 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:07:27.492 ************************************ 00:07:27.492 END TEST filesystem_in_capsule_btrfs 00:07:27.492 ************************************ 00:07:27.751 13:49:18 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:27.751 13:49:18 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:27.751 13:49:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:27.752 13:49:18 -- common/autotest_common.sh@10 -- # set +x 00:07:27.752 ************************************ 00:07:27.752 START TEST filesystem_in_capsule_xfs 00:07:27.752 ************************************ 00:07:27.752 13:49:18 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:07:27.752 13:49:18 -- target/filesystem.sh@18 -- # fstype=xfs 00:07:27.752 13:49:18 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:27.752 13:49:18 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:27.752 13:49:18 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:07:27.752 13:49:18 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:07:27.752 13:49:18 -- common/autotest_common.sh@904 -- # local i=0 00:07:27.752 13:49:18 -- common/autotest_common.sh@905 -- # local force 00:07:27.752 13:49:18 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:07:27.752 13:49:18 -- common/autotest_common.sh@910 -- # force=-f 
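The make_filesystem helper whose locals are traced above mostly papers over one inconsistency: mke2fs spells its force flag -F while mkfs.btrfs and mkfs.xfs use -f. A condensed sketch of the dispatch (the real helper also keeps the retry counter i for flaky devices):

    make_filesystem() {
        local fstype=$1 dev_name=$2 i=0 force
        if [ "$fstype" = ext4 ]; then
            force=-F                     # mke2fs
        else
            force=-f                     # btrfs and xfs agree on lowercase
        fi
        mkfs.$fstype $force "$dev_name"
    }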
00:07:27.752 13:49:18 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:27.752 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:27.752 = sectsz=512 attr=2, projid32bit=1 00:07:27.752 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:27.752 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:27.752 data = bsize=4096 blocks=130560, imaxpct=25 00:07:27.752 = sunit=0 swidth=0 blks 00:07:27.752 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:27.752 log =internal log bsize=4096 blocks=16384, version=2 00:07:27.752 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:27.752 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:28.688 Discarding blocks...Done. 00:07:28.688 13:49:19 -- common/autotest_common.sh@921 -- # return 0 00:07:28.688 13:49:19 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:30.589 13:49:21 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:30.589 13:49:21 -- target/filesystem.sh@25 -- # sync 00:07:30.589 13:49:21 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:30.589 13:49:21 -- target/filesystem.sh@27 -- # sync 00:07:30.589 13:49:21 -- target/filesystem.sh@29 -- # i=0 00:07:30.589 13:49:21 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:30.589 13:49:21 -- target/filesystem.sh@37 -- # kill -0 3111737 00:07:30.589 13:49:21 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:30.589 13:49:21 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:30.589 13:49:21 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:30.589 13:49:21 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:30.589 00:07:30.589 real 0m2.998s 00:07:30.589 user 0m0.021s 00:07:30.589 sys 0m0.052s 00:07:30.589 13:49:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.589 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:07:30.589 ************************************ 00:07:30.589 END TEST filesystem_in_capsule_xfs 00:07:30.589 ************************************ 00:07:30.589 13:49:21 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:30.848 13:49:21 -- target/filesystem.sh@93 -- # sync 00:07:30.848 13:49:21 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:31.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.108 13:49:21 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:31.108 13:49:21 -- common/autotest_common.sh@1198 -- # local i=0 00:07:31.108 13:49:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:07:31.108 13:49:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.108 13:49:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:07:31.108 13:49:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:31.108 13:49:21 -- common/autotest_common.sh@1210 -- # return 0 00:07:31.108 13:49:21 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:31.108 13:49:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:31.108 13:49:21 -- common/autotest_common.sh@10 -- # set +x 00:07:31.108 13:49:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:31.108 13:49:21 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:31.109 13:49:21 -- target/filesystem.sh@101 -- # killprocess 3111737 00:07:31.109 13:49:21 -- common/autotest_common.sh@926 -- # '[' -z 3111737 ']' 00:07:31.109 13:49:21 -- common/autotest_common.sh@930 -- # kill -0 3111737 
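Each filesystem case above ends with the same smoke test: mount the fresh filesystem over NVMe/TCP, create and delete a file with syncs in between, unmount, and then use kill -0 to prove the target process survived the I/O. In outline:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"     # signal 0 delivers nothing; it only checks nvmf_tgt is still alive
    # lsblk greps then confirm nvme0n1 and nvme0n1p1 are both still visible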
00:07:31.109 13:49:21 -- common/autotest_common.sh@931 -- # uname 00:07:31.109 13:49:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:31.109 13:49:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3111737 00:07:31.109 13:49:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:31.109 13:49:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:31.109 13:49:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3111737' 00:07:31.109 killing process with pid 3111737 00:07:31.109 13:49:22 -- common/autotest_common.sh@945 -- # kill 3111737 00:07:31.109 13:49:22 -- common/autotest_common.sh@950 -- # wait 3111737 00:07:31.677 13:49:22 -- target/filesystem.sh@102 -- # nvmfpid= 00:07:31.677 00:07:31.677 real 0m13.646s 00:07:31.677 user 0m53.585s 00:07:31.677 sys 0m1.093s 00:07:31.677 13:49:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.677 13:49:22 -- common/autotest_common.sh@10 -- # set +x 00:07:31.677 ************************************ 00:07:31.677 END TEST nvmf_filesystem_in_capsule 00:07:31.677 ************************************ 00:07:31.677 13:49:22 -- target/filesystem.sh@108 -- # nvmftestfini 00:07:31.678 13:49:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:31.678 13:49:22 -- nvmf/common.sh@116 -- # sync 00:07:31.678 13:49:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:31.678 13:49:22 -- nvmf/common.sh@119 -- # set +e 00:07:31.678 13:49:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:31.678 13:49:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:31.678 rmmod nvme_tcp 00:07:31.678 rmmod nvme_fabrics 00:07:31.678 rmmod nvme_keyring 00:07:31.678 13:49:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:31.678 13:49:22 -- nvmf/common.sh@123 -- # set -e 00:07:31.678 13:49:22 -- nvmf/common.sh@124 -- # return 0 00:07:31.678 13:49:22 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:07:31.678 13:49:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:31.678 13:49:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:31.678 13:49:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:31.678 13:49:22 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.678 13:49:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:31.678 13:49:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.678 13:49:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:31.678 13:49:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.585 13:49:24 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:33.585 00:07:33.585 real 0m34.627s 00:07:33.585 user 1m46.196s 00:07:33.585 sys 0m6.418s 00:07:33.585 13:49:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.585 13:49:24 -- common/autotest_common.sh@10 -- # set +x 00:07:33.585 ************************************ 00:07:33.585 END TEST nvmf_filesystem 00:07:33.585 ************************************ 00:07:33.585 13:49:24 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.585 13:49:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:33.585 13:49:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:33.585 13:49:24 -- common/autotest_common.sh@10 -- # set +x 00:07:33.585 ************************************ 00:07:33.585 START TEST nvmf_discovery 00:07:33.585 ************************************ 00:07:33.585 
13:49:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:33.845 * Looking for test storage... 00:07:33.845 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:33.845 13:49:24 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:33.845 13:49:24 -- nvmf/common.sh@7 -- # uname -s 00:07:33.845 13:49:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:33.845 13:49:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:33.845 13:49:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:33.845 13:49:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:33.845 13:49:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:33.845 13:49:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:33.845 13:49:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:33.845 13:49:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:33.845 13:49:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:33.845 13:49:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:33.845 13:49:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.845 13:49:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:33.845 13:49:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:33.845 13:49:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:33.845 13:49:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:33.845 13:49:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:33.845 13:49:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:33.845 13:49:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:33.845 13:49:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:33.845 13:49:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.845 13:49:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.845 13:49:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.845 13:49:24 -- paths/export.sh@5 -- # export PATH 00:07:33.845 13:49:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:33.845 13:49:24 -- nvmf/common.sh@46 -- # : 0 00:07:33.845 13:49:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:33.845 13:49:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:33.845 13:49:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:33.845 13:49:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:33.845 13:49:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:33.845 13:49:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:33.845 13:49:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:33.845 13:49:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:33.845 13:49:24 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:33.845 13:49:24 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:33.845 13:49:24 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:33.845 13:49:24 -- target/discovery.sh@15 -- # hash nvme 00:07:33.845 13:49:24 -- target/discovery.sh@20 -- # nvmftestinit 00:07:33.845 13:49:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:33.845 13:49:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:33.845 13:49:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:33.846 13:49:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:33.846 13:49:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:33.846 13:49:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:33.846 13:49:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:33.846 13:49:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:33.846 13:49:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:33.846 13:49:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:33.846 13:49:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:33.846 13:49:24 -- common/autotest_common.sh@10 -- # set +x 00:07:39.124 13:49:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:39.124 13:49:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:39.124 13:49:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:39.124 13:49:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:39.124 13:49:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:39.124 13:49:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:39.124 13:49:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:39.124 13:49:29 -- 
nvmf/common.sh@294 -- # net_devs=() 00:07:39.124 13:49:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:39.124 13:49:29 -- nvmf/common.sh@295 -- # e810=() 00:07:39.124 13:49:29 -- nvmf/common.sh@295 -- # local -ga e810 00:07:39.124 13:49:29 -- nvmf/common.sh@296 -- # x722=() 00:07:39.124 13:49:29 -- nvmf/common.sh@296 -- # local -ga x722 00:07:39.124 13:49:29 -- nvmf/common.sh@297 -- # mlx=() 00:07:39.124 13:49:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:39.124 13:49:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:39.124 13:49:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:39.124 13:49:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:39.124 13:49:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:39.124 13:49:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:39.124 13:49:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:39.124 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:39.124 13:49:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:39.124 13:49:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:39.124 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:39.124 13:49:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:39.124 13:49:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:39.124 13:49:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.124 13:49:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:39.124 13:49:29 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.124 13:49:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:39.124 Found net devices under 0000:86:00.0: cvl_0_0 00:07:39.124 13:49:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.124 13:49:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:39.124 13:49:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:39.124 13:49:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:39.124 13:49:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:39.124 13:49:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:39.124 Found net devices under 0000:86:00.1: cvl_0_1 00:07:39.124 13:49:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:39.124 13:49:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:39.124 13:49:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:39.124 13:49:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:39.124 13:49:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:39.124 13:49:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:39.124 13:49:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:39.124 13:49:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:39.124 13:49:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:39.124 13:49:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:39.124 13:49:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:39.124 13:49:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:39.124 13:49:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:39.124 13:49:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:39.124 13:49:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:39.124 13:49:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:39.124 13:49:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:39.124 13:49:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:39.124 13:49:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:39.124 13:49:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:39.124 13:49:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:39.124 13:49:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:39.383 13:49:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:39.383 13:49:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:39.383 13:49:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:39.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:39.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:07:39.383 00:07:39.383 --- 10.0.0.2 ping statistics --- 00:07:39.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.383 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:39.383 13:49:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:39.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:39.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.253 ms 00:07:39.383 00:07:39.383 --- 10.0.0.1 ping statistics --- 00:07:39.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:39.383 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:07:39.383 13:49:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:39.383 13:49:30 -- nvmf/common.sh@410 -- # return 0 00:07:39.383 13:49:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:39.383 13:49:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:39.383 13:49:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:39.383 13:49:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:39.383 13:49:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:39.383 13:49:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:39.383 13:49:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:39.383 13:49:30 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:39.383 13:49:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:39.383 13:49:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:39.383 13:49:30 -- common/autotest_common.sh@10 -- # set +x 00:07:39.383 13:49:30 -- nvmf/common.sh@469 -- # nvmfpid=3117820 00:07:39.383 13:49:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:39.383 13:49:30 -- nvmf/common.sh@470 -- # waitforlisten 3117820 00:07:39.383 13:49:30 -- common/autotest_common.sh@819 -- # '[' -z 3117820 ']' 00:07:39.383 13:49:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.383 13:49:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:39.383 13:49:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.383 13:49:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:39.383 13:49:30 -- common/autotest_common.sh@10 -- # set +x 00:07:39.383 [2024-07-23 13:49:30.286587] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:39.383 [2024-07-23 13:49:30.286631] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:39.383 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.383 [2024-07-23 13:49:30.345824] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:39.642 [2024-07-23 13:49:30.419643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.642 [2024-07-23 13:49:30.419757] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:39.642 [2024-07-23 13:49:30.419765] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:39.642 [2024-07-23 13:49:30.419771] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
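Behind the trace output above, nvmf_tcp_init plus nvmfappstart amount to a small, reproducible recipe: park one e810 port in a private network namespace for the target, keep its peer in the root namespace as the initiator, verify connectivity both ways, then launch nvmf_tgt inside the namespace. A minimal sketch using the interface names and addressing from this run (driver binding and the PCI scan are omitted; cvl_0_0/cvl_0_1 and 10.0.0.0/24 are rig-specific assumptions):

    # Two-port loopback rig as logged above; names/addresses are run-specific.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
    modprobe nvme-tcp                                      # host-side transport
    # Then start the target inside the namespace, as nvmfappstart does:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target in its own namespace is what lets a single physical host act as both NVMe/TCP initiator and target over real NIC silicon rather than loopback.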
00:07:39.642 [2024-07-23 13:49:30.419816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.642 [2024-07-23 13:49:30.419911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.642 [2024-07-23 13:49:30.420001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.642 [2024-07-23 13:49:30.420003] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.211 13:49:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:40.211 13:49:31 -- common/autotest_common.sh@852 -- # return 0 00:07:40.212 13:49:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:40.212 13:49:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:40.212 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 13:49:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:40.212 13:49:31 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:40.212 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.212 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 [2024-07-23 13:49:31.167467] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:40.212 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.212 13:49:31 -- target/discovery.sh@26 -- # seq 1 4 00:07:40.212 13:49:31 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.212 13:49:31 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:07:40.212 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.212 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 Null1 00:07:40.212 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.212 13:49:31 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:40.212 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.212 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.212 13:49:31 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:40.212 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.212 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.212 13:49:31 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:40.212 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.212 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 [2024-07-23 13:49:31.212977] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:40.212 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.212 13:49:31 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.212 13:49:31 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:40.212 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.212 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.212 Null2 00:07:40.212 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.212 13:49:31 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:40.212 13:49:31 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.212 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.471 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.472 13:49:31 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 Null3 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:40.472 13:49:31 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 Null4 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:40.472 
13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.472 13:49:31 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:07:40.472 00:07:40.472 Discovery Log Number of Records 6, Generation counter 6 00:07:40.472 =====Discovery Log Entry 0====== 00:07:40.472 trtype: tcp 00:07:40.472 adrfam: ipv4 00:07:40.472 subtype: current discovery subsystem 00:07:40.472 treq: not required 00:07:40.472 portid: 0 00:07:40.472 trsvcid: 4420 00:07:40.472 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:40.472 traddr: 10.0.0.2 00:07:40.472 eflags: explicit discovery connections, duplicate discovery information 00:07:40.472 sectype: none 00:07:40.472 =====Discovery Log Entry 1====== 00:07:40.472 trtype: tcp 00:07:40.472 adrfam: ipv4 00:07:40.472 subtype: nvme subsystem 00:07:40.472 treq: not required 00:07:40.472 portid: 0 00:07:40.472 trsvcid: 4420 00:07:40.472 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:40.472 traddr: 10.0.0.2 00:07:40.472 eflags: none 00:07:40.472 sectype: none 00:07:40.472 =====Discovery Log Entry 2====== 00:07:40.472 trtype: tcp 00:07:40.472 adrfam: ipv4 00:07:40.472 subtype: nvme subsystem 00:07:40.472 treq: not required 00:07:40.472 portid: 0 00:07:40.472 trsvcid: 4420 00:07:40.472 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:40.472 traddr: 10.0.0.2 00:07:40.472 eflags: none 00:07:40.472 sectype: none 00:07:40.472 =====Discovery Log Entry 3====== 00:07:40.472 trtype: tcp 00:07:40.472 adrfam: ipv4 00:07:40.472 subtype: nvme subsystem 00:07:40.472 treq: not required 00:07:40.472 portid: 0 00:07:40.472 trsvcid: 4420 00:07:40.472 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:40.472 traddr: 10.0.0.2 00:07:40.472 eflags: none 00:07:40.472 sectype: none 00:07:40.472 =====Discovery Log Entry 4====== 00:07:40.472 trtype: tcp 00:07:40.472 adrfam: ipv4 00:07:40.472 subtype: nvme subsystem 00:07:40.472 treq: not required 00:07:40.472 portid: 0 00:07:40.472 trsvcid: 4420 00:07:40.472 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:40.472 traddr: 10.0.0.2 00:07:40.472 eflags: none 00:07:40.472 sectype: none 00:07:40.472 =====Discovery Log Entry 5====== 00:07:40.472 trtype: tcp 00:07:40.472 adrfam: ipv4 00:07:40.472 subtype: discovery subsystem referral 00:07:40.472 treq: not required 00:07:40.472 portid: 0 00:07:40.472 trsvcid: 4430 00:07:40.472 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:40.472 traddr: 10.0.0.2 00:07:40.472 eflags: none 00:07:40.472 sectype: none 00:07:40.472 13:49:31 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:40.472 Perform nvmf subsystem discovery via RPC 00:07:40.472 13:49:31 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:40.472 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.472 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.472 [2024-07-23 13:49:31.393415] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:07:40.472 [ 00:07:40.472 { 00:07:40.472 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:40.472 "subtype": "Discovery", 00:07:40.472 "listen_addresses": [ 00:07:40.472 { 00:07:40.472 "transport": "TCP", 00:07:40.472 "trtype": "TCP", 00:07:40.472 "adrfam": "IPv4", 00:07:40.472 "traddr": "10.0.0.2", 00:07:40.472 "trsvcid": "4420" 00:07:40.472 } 00:07:40.472 ], 00:07:40.472 "allow_any_host": true, 00:07:40.472 "hosts": [] 00:07:40.472 }, 00:07:40.472 { 00:07:40.472 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:40.472 "subtype": "NVMe", 00:07:40.472 "listen_addresses": [ 00:07:40.472 { 00:07:40.472 "transport": "TCP", 00:07:40.472 "trtype": "TCP", 00:07:40.472 "adrfam": "IPv4", 00:07:40.472 "traddr": "10.0.0.2", 00:07:40.472 "trsvcid": "4420" 00:07:40.472 } 00:07:40.472 ], 00:07:40.472 "allow_any_host": true, 00:07:40.472 "hosts": [], 00:07:40.472 "serial_number": "SPDK00000000000001", 00:07:40.472 "model_number": "SPDK bdev Controller", 00:07:40.472 "max_namespaces": 32, 00:07:40.472 "min_cntlid": 1, 00:07:40.472 "max_cntlid": 65519, 00:07:40.472 "namespaces": [ 00:07:40.472 { 00:07:40.472 "nsid": 1, 00:07:40.472 "bdev_name": "Null1", 00:07:40.472 "name": "Null1", 00:07:40.472 "nguid": "205EBFD817D342B1BAB74817B6C929A2", 00:07:40.472 "uuid": "205ebfd8-17d3-42b1-bab7-4817b6c929a2" 00:07:40.472 } 00:07:40.472 ] 00:07:40.472 }, 00:07:40.472 { 00:07:40.472 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:40.472 "subtype": "NVMe", 00:07:40.472 "listen_addresses": [ 00:07:40.472 { 00:07:40.472 "transport": "TCP", 00:07:40.472 "trtype": "TCP", 00:07:40.472 "adrfam": "IPv4", 00:07:40.472 "traddr": "10.0.0.2", 00:07:40.472 "trsvcid": "4420" 00:07:40.472 } 00:07:40.472 ], 00:07:40.472 "allow_any_host": true, 00:07:40.472 "hosts": [], 00:07:40.472 "serial_number": "SPDK00000000000002", 00:07:40.472 "model_number": "SPDK bdev Controller", 00:07:40.472 "max_namespaces": 32, 00:07:40.472 "min_cntlid": 1, 00:07:40.472 "max_cntlid": 65519, 00:07:40.472 "namespaces": [ 00:07:40.472 { 00:07:40.472 "nsid": 1, 00:07:40.472 "bdev_name": "Null2", 00:07:40.472 "name": "Null2", 00:07:40.472 "nguid": "A85D6B56820C4B56B216ADFBCDC2C082", 00:07:40.472 "uuid": "a85d6b56-820c-4b56-b216-adfbcdc2c082" 00:07:40.472 } 00:07:40.472 ] 00:07:40.472 }, 00:07:40.472 { 00:07:40.472 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:40.472 "subtype": "NVMe", 00:07:40.472 "listen_addresses": [ 00:07:40.472 { 00:07:40.472 "transport": "TCP", 00:07:40.472 "trtype": "TCP", 00:07:40.472 "adrfam": "IPv4", 00:07:40.472 "traddr": "10.0.0.2", 00:07:40.472 "trsvcid": "4420" 00:07:40.472 } 00:07:40.473 ], 00:07:40.473 "allow_any_host": true, 00:07:40.473 "hosts": [], 00:07:40.473 "serial_number": "SPDK00000000000003", 00:07:40.473 "model_number": "SPDK bdev Controller", 00:07:40.473 "max_namespaces": 32, 00:07:40.473 "min_cntlid": 1, 00:07:40.473 "max_cntlid": 65519, 00:07:40.473 "namespaces": [ 00:07:40.473 { 00:07:40.473 "nsid": 1, 00:07:40.473 "bdev_name": "Null3", 00:07:40.473 "name": "Null3", 00:07:40.473 "nguid": "6D9D36FA69E946D7BFAD63EE8CB245B8", 00:07:40.473 "uuid": "6d9d36fa-69e9-46d7-bfad-63ee8cb245b8" 00:07:40.473 } 00:07:40.473 ] 
00:07:40.473 }, 00:07:40.473 { 00:07:40.473 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:40.473 "subtype": "NVMe", 00:07:40.473 "listen_addresses": [ 00:07:40.473 { 00:07:40.473 "transport": "TCP", 00:07:40.473 "trtype": "TCP", 00:07:40.473 "adrfam": "IPv4", 00:07:40.473 "traddr": "10.0.0.2", 00:07:40.473 "trsvcid": "4420" 00:07:40.473 } 00:07:40.473 ], 00:07:40.473 "allow_any_host": true, 00:07:40.473 "hosts": [], 00:07:40.473 "serial_number": "SPDK00000000000004", 00:07:40.473 "model_number": "SPDK bdev Controller", 00:07:40.473 "max_namespaces": 32, 00:07:40.473 "min_cntlid": 1, 00:07:40.473 "max_cntlid": 65519, 00:07:40.473 "namespaces": [ 00:07:40.473 { 00:07:40.473 "nsid": 1, 00:07:40.473 "bdev_name": "Null4", 00:07:40.473 "name": "Null4", 00:07:40.473 "nguid": "F30AE625196D4411B24371BEF7F38A70", 00:07:40.473 "uuid": "f30ae625-196d-4411-b243-71bef7f38a70" 00:07:40.473 } 00:07:40.473 ] 00:07:40.473 } 00:07:40.473 ] 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.473 13:49:31 -- target/discovery.sh@42 -- # seq 1 4 00:07:40.473 13:49:31 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.473 13:49:31 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.473 13:49:31 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.473 13:49:31 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.473 13:49:31 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.473 13:49:31 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.473 13:49:31 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.473 13:49:31 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.473 13:49:31 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.473 13:49:31 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:40.473 13:49:31 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:07:40.473 13:49:31 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.473 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.473 13:49:31 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:40.473 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.473 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.733 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.733 13:49:31 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:40.733 13:49:31 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:40.733 13:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.733 13:49:31 -- common/autotest_common.sh@10 -- # set +x 00:07:40.733 13:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.733 13:49:31 -- target/discovery.sh@49 -- # check_bdevs= 00:07:40.733 13:49:31 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:40.733 13:49:31 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:40.733 13:49:31 -- target/discovery.sh@57 -- # nvmftestfini 00:07:40.733 13:49:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:40.733 13:49:31 -- nvmf/common.sh@116 -- # sync 00:07:40.733 13:49:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:40.733 13:49:31 -- nvmf/common.sh@119 -- # set +e 00:07:40.733 13:49:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:40.733 13:49:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:40.733 rmmod nvme_tcp 00:07:40.733 rmmod nvme_fabrics 00:07:40.733 rmmod nvme_keyring 00:07:40.733 13:49:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:40.733 13:49:31 -- nvmf/common.sh@123 -- # set -e 00:07:40.733 13:49:31 -- nvmf/common.sh@124 -- # return 0 00:07:40.733 13:49:31 -- nvmf/common.sh@477 -- # '[' -n 3117820 ']' 00:07:40.733 13:49:31 -- nvmf/common.sh@478 -- # killprocess 3117820 00:07:40.733 13:49:31 -- common/autotest_common.sh@926 -- # '[' -z 3117820 ']' 00:07:40.733 13:49:31 -- common/autotest_common.sh@930 -- # kill -0 3117820 00:07:40.733 13:49:31 -- common/autotest_common.sh@931 -- # uname 00:07:40.733 13:49:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:40.733 13:49:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3117820 00:07:40.733 13:49:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:40.733 13:49:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:40.733 13:49:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3117820' 00:07:40.733 killing process with pid 3117820 00:07:40.733 13:49:31 -- common/autotest_common.sh@945 -- # kill 3117820 00:07:40.733 [2024-07-23 13:49:31.635461] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:07:40.733 13:49:31 -- common/autotest_common.sh@950 -- # wait 3117820 00:07:40.993 13:49:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:40.993 13:49:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:40.993 13:49:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:40.993 13:49:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:40.993 13:49:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:40.993 13:49:31 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.993 13:49:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:40.993 13:49:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.899 13:49:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:42.899 00:07:42.899 real 0m9.335s 00:07:42.899 user 0m7.359s 00:07:42.899 sys 0m4.497s 00:07:42.899 13:49:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.899 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:07:42.899 ************************************ 00:07:42.899 END TEST nvmf_discovery 00:07:42.899 ************************************ 00:07:43.159 13:49:33 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:43.159 13:49:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:43.159 13:49:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.159 13:49:33 -- common/autotest_common.sh@10 -- # set +x 00:07:43.159 ************************************ 00:07:43.159 START TEST nvmf_referrals 00:07:43.159 ************************************ 00:07:43.159 13:49:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:43.159 * Looking for test storage... 00:07:43.159 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:43.159 13:49:34 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:43.159 13:49:34 -- nvmf/common.sh@7 -- # uname -s 00:07:43.159 13:49:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:43.159 13:49:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:43.159 13:49:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:43.159 13:49:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:43.159 13:49:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:43.159 13:49:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:43.159 13:49:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:43.159 13:49:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:43.159 13:49:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:43.159 13:49:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:43.159 13:49:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:43.159 13:49:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:43.159 13:49:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:43.159 13:49:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:43.159 13:49:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:43.159 13:49:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:43.159 13:49:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.159 13:49:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.159 13:49:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.159 13:49:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.159 13:49:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.159 13:49:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.159 13:49:34 -- paths/export.sh@5 -- # export PATH 00:07:43.159 13:49:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.159 13:49:34 -- nvmf/common.sh@46 -- # : 0 00:07:43.159 13:49:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:43.159 13:49:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:43.159 13:49:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:43.159 13:49:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:43.159 13:49:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:43.159 13:49:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:43.159 13:49:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:43.159 13:49:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:43.159 13:49:34 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:43.159 13:49:34 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:43.159 13:49:34 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:43.159 13:49:34 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:43.159 13:49:34 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:43.159 13:49:34 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:43.159 13:49:34 -- target/referrals.sh@37 -- # nvmftestinit 00:07:43.159 13:49:34 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:07:43.159 13:49:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:43.159 13:49:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:43.159 13:49:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:43.159 13:49:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:43.159 13:49:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:43.159 13:49:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:43.159 13:49:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.159 13:49:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:43.159 13:49:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:43.159 13:49:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:43.159 13:49:34 -- common/autotest_common.sh@10 -- # set +x 00:07:48.446 13:49:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:48.446 13:49:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:48.446 13:49:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:48.446 13:49:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:48.446 13:49:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:48.446 13:49:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:48.446 13:49:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:48.446 13:49:39 -- nvmf/common.sh@294 -- # net_devs=() 00:07:48.446 13:49:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:48.446 13:49:39 -- nvmf/common.sh@295 -- # e810=() 00:07:48.446 13:49:39 -- nvmf/common.sh@295 -- # local -ga e810 00:07:48.446 13:49:39 -- nvmf/common.sh@296 -- # x722=() 00:07:48.446 13:49:39 -- nvmf/common.sh@296 -- # local -ga x722 00:07:48.446 13:49:39 -- nvmf/common.sh@297 -- # mlx=() 00:07:48.446 13:49:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:48.446 13:49:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.446 13:49:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.446 13:49:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.446 13:49:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.446 13:49:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.446 13:49:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.446 13:49:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.446 13:49:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.447 13:49:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.447 13:49:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.447 13:49:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.447 13:49:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:48.447 13:49:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:48.447 13:49:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:48.447 13:49:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:48.447 13:49:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:48.447 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:48.447 13:49:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:48.447 13:49:39 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:48.447 13:49:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:48.447 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:48.447 13:49:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:48.447 13:49:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:48.447 13:49:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.447 13:49:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:48.447 13:49:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.447 13:49:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:48.447 Found net devices under 0000:86:00.0: cvl_0_0 00:07:48.447 13:49:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.447 13:49:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:48.447 13:49:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.447 13:49:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:48.447 13:49:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.447 13:49:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:48.447 Found net devices under 0000:86:00.1: cvl_0_1 00:07:48.447 13:49:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.447 13:49:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:48.447 13:49:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:48.447 13:49:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:48.447 13:49:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.447 13:49:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.447 13:49:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.447 13:49:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:48.447 13:49:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.447 13:49:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.447 13:49:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:48.447 13:49:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.447 13:49:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.447 13:49:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:48.447 13:49:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:48.447 13:49:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.447 13:49:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:07:48.447 13:49:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.447 13:49:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.447 13:49:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:48.447 13:49:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.447 13:49:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.447 13:49:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.447 13:49:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:48.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:48.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:07:48.447 00:07:48.447 --- 10.0.0.2 ping statistics --- 00:07:48.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.447 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:07:48.447 13:49:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:07:48.447 00:07:48.447 --- 10.0.0.1 ping statistics --- 00:07:48.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.447 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:07:48.447 13:49:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.447 13:49:39 -- nvmf/common.sh@410 -- # return 0 00:07:48.447 13:49:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:48.447 13:49:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.447 13:49:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:48.447 13:49:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.447 13:49:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:48.447 13:49:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:48.447 13:49:39 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:48.447 13:49:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:48.447 13:49:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:48.447 13:49:39 -- common/autotest_common.sh@10 -- # set +x 00:07:48.447 13:49:39 -- nvmf/common.sh@469 -- # nvmfpid=3121405 00:07:48.447 13:49:39 -- nvmf/common.sh@470 -- # waitforlisten 3121405 00:07:48.447 13:49:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:48.447 13:49:39 -- common/autotest_common.sh@819 -- # '[' -z 3121405 ']' 00:07:48.447 13:49:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.447 13:49:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:48.447 13:49:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.448 13:49:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:48.448 13:49:39 -- common/autotest_common.sh@10 -- # set +x 00:07:48.448 [2024-07-23 13:49:39.421872] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
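Once this second nvmf_tgt instance is up, the referral exercise that follows is pure RPC traffic: register three referral addresses on the discovery subsystem, confirm that both the RPC view and a live discovery against the 8009 listener report them, then remove them again. A compressed sketch driven through SPDK's scripts/rpc.py (the rpc_cmd seen in the trace is a thin wrapper around it); the default /var/tmp/spdk.sock RPC socket is assumed, and the nvme discover call omits the --hostnqn/--hostid pair used in this run:

    # Referral round trip exercised below (sketch, not the literal script).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    rpc.py nvmf_discovery_get_referrals | jq length    # expect 3
    # The same three addresses must surface in an actual discovery:
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done

The point of checking both paths is that referrals live in the discovery log page served to initiators, so the RPC state and what a real host sees during nvme discover have to agree.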
00:07:48.448 [2024-07-23 13:49:39.421916] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.448 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.707 [2024-07-23 13:49:39.480085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.707 [2024-07-23 13:49:39.558423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:48.707 [2024-07-23 13:49:39.558529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.707 [2024-07-23 13:49:39.558537] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.707 [2024-07-23 13:49:39.558544] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:48.707 [2024-07-23 13:49:39.558589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.707 [2024-07-23 13:49:39.558687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.707 [2024-07-23 13:49:39.558749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.707 [2024-07-23 13:49:39.558751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.274 13:49:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:49.274 13:49:40 -- common/autotest_common.sh@852 -- # return 0 00:07:49.274 13:49:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:49.274 13:49:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:49.274 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.274 13:49:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:49.274 13:49:40 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:49.274 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.274 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.274 [2024-07-23 13:49:40.262391] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:49.274 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.274 13:49:40 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:49.274 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.274 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.274 [2024-07-23 13:49:40.275723] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:49.274 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.274 13:49:40 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:49.274 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.274 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.274 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.274 13:49:40 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:49.274 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.274 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.534 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.534 13:49:40 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:07:49.534 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.534 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.534 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.534 13:49:40 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.534 13:49:40 -- target/referrals.sh@48 -- # jq length 00:07:49.534 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.534 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.534 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.534 13:49:40 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:49.534 13:49:40 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:49.534 13:49:40 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:49.534 13:49:40 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.534 13:49:40 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:49.534 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.534 13:49:40 -- target/referrals.sh@21 -- # sort 00:07:49.534 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.534 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.534 13:49:40 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:49.534 13:49:40 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:49.534 13:49:40 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:49.534 13:49:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.534 13:49:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.534 13:49:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.534 13:49:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.534 13:49:40 -- target/referrals.sh@26 -- # sort 00:07:49.793 13:49:40 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:49.793 13:49:40 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:49.793 13:49:40 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:49.793 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.793 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.793 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.793 13:49:40 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:49.793 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.793 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.793 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.793 13:49:40 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:49.793 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.793 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.793 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.793 13:49:40 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.793 13:49:40 -- target/referrals.sh@56 -- # jq length 00:07:49.793 13:49:40 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.793 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.793 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.793 13:49:40 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:49.793 13:49:40 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:49.793 13:49:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.793 13:49:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.793 13:49:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.793 13:49:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.794 13:49:40 -- target/referrals.sh@26 -- # sort 00:07:49.794 13:49:40 -- target/referrals.sh@26 -- # echo 00:07:49.794 13:49:40 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:49.794 13:49:40 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:49.794 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.794 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.794 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.794 13:49:40 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:49.794 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.794 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.794 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.794 13:49:40 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:49.794 13:49:40 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:49.794 13:49:40 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:49.794 13:49:40 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:49.794 13:49:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:49.794 13:49:40 -- target/referrals.sh@21 -- # sort 00:07:49.794 13:49:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.794 13:49:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:49.794 13:49:40 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:49.794 13:49:40 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:49.794 13:49:40 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:49.794 13:49:40 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:49.794 13:49:40 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:49.794 13:49:40 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:49.794 13:49:40 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:49.794 13:49:40 -- target/referrals.sh@26 -- # sort 00:07:50.053 13:49:40 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:50.053 13:49:40 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:50.053 13:49:40 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:50.053 13:49:40 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:50.053 13:49:40 
-- target/referrals.sh@67 -- # jq -r .subnqn 00:07:50.053 13:49:40 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.053 13:49:40 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:50.053 13:49:40 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:50.053 13:49:40 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:50.053 13:49:40 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:50.053 13:49:40 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:50.053 13:49:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.053 13:49:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:50.311 13:49:41 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:50.311 13:49:41 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:50.311 13:49:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.311 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:07:50.311 13:49:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.311 13:49:41 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:50.311 13:49:41 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:50.311 13:49:41 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.311 13:49:41 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:50.311 13:49:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.311 13:49:41 -- target/referrals.sh@21 -- # sort 00:07:50.311 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:07:50.311 13:49:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.311 13:49:41 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:50.311 13:49:41 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:50.311 13:49:41 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:50.311 13:49:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:50.311 13:49:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:50.311 13:49:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.311 13:49:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:50.311 13:49:41 -- target/referrals.sh@26 -- # sort 00:07:50.311 13:49:41 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:50.311 13:49:41 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:50.311 13:49:41 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:50.311 13:49:41 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:50.311 13:49:41 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:50.311 13:49:41 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.311 13:49:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:50.570 13:49:41 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:50.570 13:49:41 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:50.570 13:49:41 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:50.570 13:49:41 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:50.570 13:49:41 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.570 13:49:41 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:50.570 13:49:41 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:50.570 13:49:41 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:50.570 13:49:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.570 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:07:50.570 13:49:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.570 13:49:41 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:50.570 13:49:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:50.570 13:49:41 -- target/referrals.sh@82 -- # jq length 00:07:50.570 13:49:41 -- common/autotest_common.sh@10 -- # set +x 00:07:50.570 13:49:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:50.570 13:49:41 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:50.570 13:49:41 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:50.570 13:49:41 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:50.570 13:49:41 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:50.570 13:49:41 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:50.570 13:49:41 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:50.570 13:49:41 -- target/referrals.sh@26 -- # sort 00:07:50.570 13:49:41 -- target/referrals.sh@26 -- # echo 00:07:50.570 13:49:41 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:50.570 13:49:41 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:50.570 13:49:41 -- target/referrals.sh@86 -- # nvmftestfini 00:07:50.570 13:49:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:50.570 13:49:41 -- nvmf/common.sh@116 -- # sync 00:07:50.570 13:49:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:50.570 13:49:41 -- nvmf/common.sh@119 -- # set +e 00:07:50.570 13:49:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:50.570 13:49:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:50.829 rmmod nvme_tcp 00:07:50.829 rmmod nvme_fabrics 00:07:50.829 rmmod nvme_keyring 00:07:50.829 13:49:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:50.829 13:49:41 -- nvmf/common.sh@123 -- # set -e 00:07:50.829 13:49:41 -- nvmf/common.sh@124 -- # return 0 00:07:50.829 13:49:41 -- nvmf/common.sh@477 
-- # '[' -n 3121405 ']' 00:07:50.829 13:49:41 -- nvmf/common.sh@478 -- # killprocess 3121405 00:07:50.829 13:49:41 -- common/autotest_common.sh@926 -- # '[' -z 3121405 ']' 00:07:50.829 13:49:41 -- common/autotest_common.sh@930 -- # kill -0 3121405 00:07:50.829 13:49:41 -- common/autotest_common.sh@931 -- # uname 00:07:50.829 13:49:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:50.829 13:49:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3121405 00:07:50.829 13:49:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:50.829 13:49:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:50.829 13:49:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3121405' 00:07:50.829 killing process with pid 3121405 00:07:50.829 13:49:41 -- common/autotest_common.sh@945 -- # kill 3121405 00:07:50.829 13:49:41 -- common/autotest_common.sh@950 -- # wait 3121405 00:07:51.089 13:49:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:51.089 13:49:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:51.089 13:49:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:51.089 13:49:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:51.089 13:49:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:51.089 13:49:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.089 13:49:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:51.089 13:49:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.023 13:49:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:53.023 00:07:53.023 real 0m9.991s 00:07:53.023 user 0m11.461s 00:07:53.023 sys 0m4.504s 00:07:53.023 13:49:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.023 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.023 ************************************ 00:07:53.023 END TEST nvmf_referrals 00:07:53.023 ************************************ 00:07:53.023 13:49:43 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:53.023 13:49:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:53.023 13:49:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.023 13:49:43 -- common/autotest_common.sh@10 -- # set +x 00:07:53.023 ************************************ 00:07:53.023 START TEST nvmf_connect_disconnect 00:07:53.023 ************************************ 00:07:53.023 13:49:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:53.283 * Looking for test storage... 
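Every check in the nvmf_referrals suite that just ended paired an RPC-side mutation with a host-side read of the discovery log page. One round trip, reduced to its essentials (rpc_cmd in the trace wraps SPDK's rpc.py; the script path below is the conventional location and an assumption here, and the --hostnqn/--hostid flags are dropped for brevity):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

    # Advertise a referral, then confirm the host sees it in the discovery log page.
    $RPC nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

    # Withdraw it and re-read; the traddr must disappear again.
    $RPC nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430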
00:07:53.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:53.283 13:49:44 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:53.283 13:49:44 -- nvmf/common.sh@7 -- # uname -s 00:07:53.283 13:49:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:53.283 13:49:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:53.283 13:49:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:53.283 13:49:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:53.283 13:49:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:53.283 13:49:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:53.283 13:49:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:53.283 13:49:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:53.283 13:49:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:53.283 13:49:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:53.283 13:49:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:53.283 13:49:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:53.283 13:49:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:53.283 13:49:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:53.283 13:49:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:53.283 13:49:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:53.283 13:49:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:53.283 13:49:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:53.283 13:49:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:53.283 13:49:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.283 13:49:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.283 13:49:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.283 13:49:44 -- paths/export.sh@5 -- # export PATH 00:07:53.283 13:49:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:53.283 13:49:44 -- nvmf/common.sh@46 -- # : 0 00:07:53.283 13:49:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:53.283 13:49:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:53.283 13:49:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:53.283 13:49:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:53.283 13:49:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:53.283 13:49:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:53.283 13:49:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:53.283 13:49:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:53.283 13:49:44 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:53.283 13:49:44 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:53.283 13:49:44 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:53.283 13:49:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:53.283 13:49:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:53.283 13:49:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:53.283 13:49:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:53.283 13:49:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:53.283 13:49:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:53.283 13:49:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:53.283 13:49:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:53.283 13:49:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:53.283 13:49:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:53.283 13:49:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:53.283 13:49:44 -- common/autotest_common.sh@10 -- # set +x 00:07:58.560 13:49:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:58.560 13:49:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:58.560 13:49:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:58.560 13:49:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:58.560 13:49:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:58.560 13:49:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:58.560 13:49:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:58.560 13:49:49 -- nvmf/common.sh@294 -- # net_devs=() 00:07:58.560 13:49:49 -- nvmf/common.sh@294 -- # local -ga net_devs 
00:07:58.560 13:49:49 -- nvmf/common.sh@295 -- # e810=() 00:07:58.560 13:49:49 -- nvmf/common.sh@295 -- # local -ga e810 00:07:58.560 13:49:49 -- nvmf/common.sh@296 -- # x722=() 00:07:58.560 13:49:49 -- nvmf/common.sh@296 -- # local -ga x722 00:07:58.560 13:49:49 -- nvmf/common.sh@297 -- # mlx=() 00:07:58.560 13:49:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:58.560 13:49:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:58.560 13:49:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:58.560 13:49:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:58.560 13:49:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:58.560 13:49:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:58.560 13:49:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:58.560 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:58.560 13:49:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:58.560 13:49:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:58.560 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:58.560 13:49:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:58.560 13:49:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:58.560 13:49:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:58.560 13:49:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.560 13:49:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:58.561 13:49:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.561 13:49:49 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:86:00.0: cvl_0_0' 00:07:58.561 Found net devices under 0000:86:00.0: cvl_0_0 00:07:58.561 13:49:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.561 13:49:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:58.561 13:49:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:58.561 13:49:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:58.561 13:49:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:58.561 13:49:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:58.561 Found net devices under 0000:86:00.1: cvl_0_1 00:07:58.561 13:49:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:58.561 13:49:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:58.561 13:49:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:58.561 13:49:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:58.561 13:49:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:58.561 13:49:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:58.561 13:49:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.561 13:49:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.561 13:49:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:58.561 13:49:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:58.561 13:49:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:58.561 13:49:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:58.561 13:49:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:58.561 13:49:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:58.561 13:49:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.561 13:49:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:58.561 13:49:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:58.561 13:49:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:58.561 13:49:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:58.561 13:49:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:58.561 13:49:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:58.561 13:49:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:58.561 13:49:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:58.561 13:49:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:58.561 13:49:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:58.821 13:49:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:58.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:07:58.821 00:07:58.821 --- 10.0.0.2 ping statistics --- 00:07:58.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.821 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:07:58.821 13:49:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:58.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:07:58.821 00:07:58.821 --- 10.0.0.1 ping statistics --- 00:07:58.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.821 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:07:58.821 13:49:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.821 13:49:49 -- nvmf/common.sh@410 -- # return 0 00:07:58.821 13:49:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:58.821 13:49:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.821 13:49:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:58.821 13:49:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:58.821 13:49:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.821 13:49:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:58.821 13:49:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:58.821 13:49:49 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:58.821 13:49:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:58.821 13:49:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.821 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:58.821 13:49:49 -- nvmf/common.sh@469 -- # nvmfpid=3125508 00:07:58.821 13:49:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:58.821 13:49:49 -- nvmf/common.sh@470 -- # waitforlisten 3125508 00:07:58.821 13:49:49 -- common/autotest_common.sh@819 -- # '[' -z 3125508 ']' 00:07:58.821 13:49:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.821 13:49:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.821 13:49:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.821 13:49:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.821 13:49:49 -- common/autotest_common.sh@10 -- # set +x 00:07:58.821 [2024-07-23 13:49:49.662712] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:58.821 [2024-07-23 13:49:49.662751] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:58.821 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.821 [2024-07-23 13:49:49.722897] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:58.821 [2024-07-23 13:49:49.798362] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.821 [2024-07-23 13:49:49.798477] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:58.821 [2024-07-23 13:49:49.798485] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:58.821 [2024-07-23 13:49:49.798491] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
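nvmfappstart, echoed above, amounts to launching nvmf_tgt inside the target namespace and blocking until its RPC socket answers. A minimal equivalent of that launch-and-wait (the polling loop only approximates the harness's waitforlisten helper; the rpc.py path is an assumption):

    NS=cvl_0_0_ns_spdk
    TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path

    ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xF &   # 4 reactors, all tracepoint groups
    nvmfpid=$!

    # waitforlisten, approximated: poll until /var/tmp/spdk.sock accepts RPCs.
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done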
00:07:58.821 [2024-07-23 13:49:49.798532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:58.821 [2024-07-23 13:49:49.798560] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:58.821 [2024-07-23 13:49:49.798644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:58.821 [2024-07-23 13:49:49.798646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.757 13:49:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:59.757 13:49:50 -- common/autotest_common.sh@852 -- # return 0 00:07:59.757 13:49:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:59.757 13:49:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:59.757 13:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.757 13:49:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:59.757 13:49:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.757 13:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.757 [2024-07-23 13:49:50.498261] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.757 13:49:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:59.757 13:49:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.757 13:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.757 13:49:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:59.757 13:49:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.757 13:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.757 13:49:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.757 13:49:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.757 13:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.757 13:49:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.757 13:49:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.757 13:49:50 -- common/autotest_common.sh@10 -- # set +x 00:07:59.757 [2024-07-23 13:49:50.550232] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:59.757 13:49:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:59.757 13:49:50 -- target/connect_disconnect.sh@34 -- # set +x 00:08:02.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:04.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:06.727 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:09.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
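With the TCP transport, the 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and the 10.0.0.2:4420 listener all configured above, each of the num_iterations=100 rounds that follow is a plain connect/disconnect cycle driven from the initiator. Roughly (the --hostnqn/--hostid flags are dropped for brevity; -i 8 matches the NVME_CONNECT override above):

    for i in $(seq 1 100); do
        nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        # Each teardown prints the "disconnected 1 controller(s)" line seen below.
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done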
00:08:11.167 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [the same "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" line repeats once per remaining iteration of the 100-cycle loop, timestamps 00:08:13.705 through 00:11:47.441] 00:11:47.701 13:53:38 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT
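nvmftestfini then unwinds everything the init path set up. The teardown replayed below reduces to roughly this (the namespace removal stands in for the harness's _remove_spdk_ns helper):

    sync
    modprobe -v -r nvme-tcp              # also pulls out nvme_fabrics and nvme_keyring
    kill "$nvmfpid" && wait "$nvmfpid"   # reactor_0 is the SPDK target process
    ip netns delete cvl_0_0_ns_spdk      # approximates _remove_spdk_ns
    ip -4 addr flush cvl_0_1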
00:11:47.701 13:53:38 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:47.701 13:53:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:47.701 13:53:38 -- nvmf/common.sh@116 -- # sync 00:11:47.701 13:53:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:47.701 13:53:38 -- nvmf/common.sh@119 -- # set +e 00:11:47.701 13:53:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:47.701 13:53:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:47.701 rmmod nvme_tcp 00:11:47.701 rmmod nvme_fabrics 00:11:47.701 rmmod nvme_keyring 00:11:47.701 13:53:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:47.701 13:53:38 -- nvmf/common.sh@123 -- # set -e 00:11:47.701 13:53:38 -- nvmf/common.sh@124 -- # return 0 00:11:47.701 13:53:38 -- nvmf/common.sh@477 -- # '[' -n 3125508 ']' 00:11:47.701 13:53:38 -- nvmf/common.sh@478 -- # killprocess 3125508 00:11:47.701 13:53:38 -- common/autotest_common.sh@926 -- # '[' -z 3125508 ']' 00:11:47.701 13:53:38 -- common/autotest_common.sh@930 -- # kill -0 3125508 00:11:47.701 13:53:38 -- common/autotest_common.sh@931 -- # uname 00:11:47.701 13:53:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:47.701 13:53:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3125508 00:11:47.701 13:53:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:47.701 13:53:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:47.701 13:53:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3125508' 00:11:47.701 killing process with pid 3125508 00:11:47.701 13:53:38 -- common/autotest_common.sh@945 -- # kill 3125508 00:11:47.701 13:53:38 -- common/autotest_common.sh@950 -- # wait 3125508 00:11:47.961 13:53:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:47.961 13:53:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:47.961 13:53:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:47.961 13:53:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.961 13:53:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:47.961 13:53:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.961 13:53:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:47.961 13:53:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.871 13:53:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:49.871 00:11:49.871 real 3m56.880s 00:11:49.871 user 15m8.781s 00:11:49.871 sys 0m17.404s 00:11:49.871 13:53:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:49.871 13:53:40 -- common/autotest_common.sh@10 -- # set +x 00:11:49.871 ************************************ 00:11:49.871 END TEST nvmf_connect_disconnect 00:11:49.871 ************************************ 00:11:50.131 13:53:40 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:50.131 13:53:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:50.131 13:53:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:50.131 13:53:40 -- common/autotest_common.sh@10 -- # set +x 00:11:50.131 ************************************ 00:11:50.131 START TEST nvmf_multitarget 00:11:50.131 ************************************ 00:11:50.131 13:53:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:50.131 * Looking for test storage... 
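run_test, which stamped the END TEST nvmf_connect_disconnect and START TEST nvmf_multitarget banners just above, is the autotest wrapper that brackets a suite and times it. A sketch of its shape only (simplified; the real helper lives in autotest_common.sh and also handles argument checks and xtrace control):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"     # e.g. multitarget.sh --transport=tcp
        echo "END TEST $name"
    }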
00:11:50.131 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.131 13:53:40 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.131 13:53:40 -- nvmf/common.sh@7 -- # uname -s 00:11:50.131 13:53:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.131 13:53:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.131 13:53:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.131 13:53:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.131 13:53:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.131 13:53:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.131 13:53:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.131 13:53:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.131 13:53:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.131 13:53:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.131 13:53:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:50.131 13:53:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:50.131 13:53:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.131 13:53:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.131 13:53:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.131 13:53:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.131 13:53:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.131 13:53:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.131 13:53:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.131 13:53:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.131 13:53:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.131 13:53:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.131 13:53:41 -- paths/export.sh@5 -- # export PATH 00:11:50.131 13:53:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.131 13:53:41 -- nvmf/common.sh@46 -- # : 0 00:11:50.131 13:53:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:50.131 13:53:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:50.131 13:53:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:50.131 13:53:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.131 13:53:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.131 13:53:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:50.131 13:53:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:50.131 13:53:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:50.131 13:53:41 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:50.131 13:53:41 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:50.131 13:53:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:50.131 13:53:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.131 13:53:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:50.131 13:53:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:50.131 13:53:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:50.132 13:53:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.132 13:53:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.132 13:53:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.132 13:53:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:50.132 13:53:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:50.132 13:53:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:50.132 13:53:41 -- common/autotest_common.sh@10 -- # set +x 00:11:55.409 13:53:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:55.409 13:53:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:55.409 13:53:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:55.409 13:53:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:55.409 13:53:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:55.409 13:53:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:55.409 13:53:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:55.409 13:53:46 -- nvmf/common.sh@294 -- # net_devs=() 00:11:55.409 13:53:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:55.409 13:53:46 -- 
nvmf/common.sh@295 -- # e810=() 00:11:55.409 13:53:46 -- nvmf/common.sh@295 -- # local -ga e810 00:11:55.409 13:53:46 -- nvmf/common.sh@296 -- # x722=() 00:11:55.409 13:53:46 -- nvmf/common.sh@296 -- # local -ga x722 00:11:55.409 13:53:46 -- nvmf/common.sh@297 -- # mlx=() 00:11:55.409 13:53:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:55.409 13:53:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.409 13:53:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:55.409 13:53:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:55.409 13:53:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:55.409 13:53:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:55.409 13:53:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:55.409 13:53:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:55.410 13:53:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:55.410 13:53:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:55.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:55.410 13:53:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:55.410 13:53:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:55.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:55.410 13:53:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:55.410 13:53:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:55.410 13:53:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.410 13:53:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:55.410 13:53:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.410 13:53:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:11:55.410 Found net devices under 0000:86:00.0: cvl_0_0 00:11:55.410 13:53:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.410 13:53:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:55.410 13:53:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.410 13:53:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:55.410 13:53:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.410 13:53:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:55.410 Found net devices under 0000:86:00.1: cvl_0_1 00:11:55.410 13:53:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.410 13:53:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:55.410 13:53:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:55.410 13:53:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:55.410 13:53:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:55.410 13:53:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:55.410 13:53:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:55.410 13:53:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:55.410 13:53:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:55.410 13:53:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:55.410 13:53:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:55.410 13:53:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:55.410 13:53:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:55.410 13:53:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:55.410 13:53:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:55.410 13:53:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:55.410 13:53:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:55.410 13:53:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:55.670 13:53:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:55.670 13:53:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:55.670 13:53:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:55.670 13:53:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:55.670 13:53:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:55.670 13:53:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:55.670 13:53:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:55.670 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:55.670 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:11:55.670 00:11:55.670 --- 10.0.0.2 ping statistics --- 00:11:55.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.670 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:11:55.670 13:53:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:55.670 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:55.670 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:11:55.670 00:11:55.670 --- 10.0.0.1 ping statistics --- 00:11:55.670 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:55.670 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:11:55.670 13:53:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:55.670 13:53:46 -- nvmf/common.sh@410 -- # return 0 00:11:55.670 13:53:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:55.670 13:53:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:55.670 13:53:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:55.670 13:53:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:55.670 13:53:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:55.670 13:53:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:55.670 13:53:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:55.670 13:53:46 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:55.670 13:53:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:55.670 13:53:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:55.670 13:53:46 -- common/autotest_common.sh@10 -- # set +x 00:11:55.670 13:53:46 -- nvmf/common.sh@469 -- # nvmfpid=3169274 00:11:55.670 13:53:46 -- nvmf/common.sh@470 -- # waitforlisten 3169274 00:11:55.670 13:53:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.670 13:53:46 -- common/autotest_common.sh@819 -- # '[' -z 3169274 ']' 00:11:55.670 13:53:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.670 13:53:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:55.670 13:53:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.670 13:53:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:55.670 13:53:46 -- common/autotest_common.sh@10 -- # set +x 00:11:55.670 [2024-07-23 13:53:46.675950] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:55.670 [2024-07-23 13:53:46.675989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.930 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.930 [2024-07-23 13:53:46.732483] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.930 [2024-07-23 13:53:46.809071] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:55.930 [2024-07-23 13:53:46.809183] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.930 [2024-07-23 13:53:46.809191] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.930 [2024-07-23 13:53:46.809198] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
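
Summarizing the nvmf_tcp_init sequence traced just above: the target-side port (cvl_0_0) moves into a private network namespace as 10.0.0.2 while the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, so one host can exercise both ends of NVMe/TCP over real E810 ports. A minimal sketch of that topology, using only commands that appear in the trace:

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target side
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                                            # root ns -> namespace
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # namespace -> root ns
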
00:11:55.930 [2024-07-23 13:53:46.809232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.930 [2024-07-23 13:53:46.809332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.930 [2024-07-23 13:53:46.809395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:55.930 [2024-07-23 13:53:46.809396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.498 13:53:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:56.498 13:53:47 -- common/autotest_common.sh@852 -- # return 0 00:11:56.498 13:53:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:56.498 13:53:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:56.498 13:53:47 -- common/autotest_common.sh@10 -- # set +x 00:11:56.757 13:53:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.757 13:53:47 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:56.757 13:53:47 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:56.757 13:53:47 -- target/multitarget.sh@21 -- # jq length 00:11:56.757 13:53:47 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:56.757 13:53:47 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:56.757 "nvmf_tgt_1" 00:11:56.757 13:53:47 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:57.017 "nvmf_tgt_2" 00:11:57.017 13:53:47 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:57.017 13:53:47 -- target/multitarget.sh@28 -- # jq length 00:11:57.017 13:53:47 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:57.017 13:53:47 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:57.017 true 00:11:57.017 13:53:48 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:57.276 true 00:11:57.276 13:53:48 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:57.276 13:53:48 -- target/multitarget.sh@35 -- # jq length 00:11:57.276 13:53:48 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:57.276 13:53:48 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:57.276 13:53:48 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:57.276 13:53:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:57.276 13:53:48 -- nvmf/common.sh@116 -- # sync 00:11:57.276 13:53:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:57.276 13:53:48 -- nvmf/common.sh@119 -- # set +e 00:11:57.276 13:53:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:57.276 13:53:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:57.276 rmmod nvme_tcp 00:11:57.276 rmmod nvme_fabrics 00:11:57.276 rmmod nvme_keyring 00:11:57.276 13:53:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:57.276 13:53:48 -- nvmf/common.sh@123 -- # set -e 00:11:57.276 13:53:48 -- nvmf/common.sh@124 -- # return 0 
00:11:57.276 13:53:48 -- nvmf/common.sh@477 -- # '[' -n 3169274 ']' 00:11:57.276 13:53:48 -- nvmf/common.sh@478 -- # killprocess 3169274 00:11:57.276 13:53:48 -- common/autotest_common.sh@926 -- # '[' -z 3169274 ']' 00:11:57.276 13:53:48 -- common/autotest_common.sh@930 -- # kill -0 3169274 00:11:57.276 13:53:48 -- common/autotest_common.sh@931 -- # uname 00:11:57.536 13:53:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:57.536 13:53:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3169274 00:11:57.536 13:53:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:57.536 13:53:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:57.536 13:53:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3169274' 00:11:57.536 killing process with pid 3169274 00:11:57.536 13:53:48 -- common/autotest_common.sh@945 -- # kill 3169274 00:11:57.536 13:53:48 -- common/autotest_common.sh@950 -- # wait 3169274 00:11:57.536 13:53:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:57.536 13:53:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:57.536 13:53:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:57.536 13:53:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:57.536 13:53:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:57.536 13:53:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.536 13:53:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.536 13:53:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.074 13:53:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:00.074 00:12:00.074 real 0m9.695s 00:12:00.074 user 0m9.144s 00:12:00.074 sys 0m4.628s 00:12:00.074 13:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:00.074 13:53:50 -- common/autotest_common.sh@10 -- # set +x 00:12:00.074 ************************************ 00:12:00.074 END TEST nvmf_multitarget 00:12:00.074 ************************************ 00:12:00.074 13:53:50 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:00.074 13:53:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:00.074 13:53:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:00.074 13:53:50 -- common/autotest_common.sh@10 -- # set +x 00:12:00.074 ************************************ 00:12:00.074 START TEST nvmf_rpc 00:12:00.074 ************************************ 00:12:00.074 13:53:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:00.074 * Looking for test storage... 
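
For reference, the nvmf_multitarget run that just finished boils down to this sequence of multitarget_rpc.py calls, condensed from the trace (the rpc_py path is abbreviated here): assert one default target, create two more, delete them, and assert the count drops back to one.

rpc_py=./test/nvmf/target/multitarget_rpc.py   # path abbreviated from the log

[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to the default
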
00:12:00.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.074 13:53:50 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.074 13:53:50 -- nvmf/common.sh@7 -- # uname -s 00:12:00.074 13:53:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.074 13:53:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.074 13:53:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.074 13:53:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.074 13:53:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:00.074 13:53:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:00.074 13:53:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.074 13:53:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:00.074 13:53:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.074 13:53:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:00.074 13:53:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.074 13:53:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.074 13:53:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.074 13:53:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:00.074 13:53:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:00.075 13:53:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.075 13:53:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.075 13:53:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.075 13:53:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.075 13:53:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.075 13:53:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.075 13:53:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.075 13:53:50 -- paths/export.sh@5 -- # export PATH 00:12:00.075 13:53:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.075 13:53:50 -- nvmf/common.sh@46 -- # : 0 00:12:00.075 13:53:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:00.075 13:53:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:00.075 13:53:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:00.075 13:53:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.075 13:53:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.075 13:53:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:00.075 13:53:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:00.075 13:53:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:00.075 13:53:50 -- target/rpc.sh@11 -- # loops=5 00:12:00.075 13:53:50 -- target/rpc.sh@23 -- # nvmftestinit 00:12:00.075 13:53:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:00.075 13:53:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.075 13:53:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:00.075 13:53:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:00.075 13:53:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:00.075 13:53:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.075 13:53:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:00.075 13:53:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:00.075 13:53:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:00.075 13:53:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:00.075 13:53:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:00.075 13:53:50 -- common/autotest_common.sh@10 -- # set +x 00:12:05.348 13:53:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:05.348 13:53:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:05.348 13:53:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:05.348 13:53:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:05.348 13:53:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:05.348 13:53:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:05.348 13:53:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:05.348 13:53:56 -- nvmf/common.sh@294 -- # net_devs=() 00:12:05.348 13:53:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:05.348 13:53:56 -- nvmf/common.sh@295 -- # e810=() 00:12:05.348 13:53:56 -- nvmf/common.sh@295 -- # local -ga e810 00:12:05.348 
13:53:56 -- nvmf/common.sh@296 -- # x722=() 00:12:05.348 13:53:56 -- nvmf/common.sh@296 -- # local -ga x722 00:12:05.348 13:53:56 -- nvmf/common.sh@297 -- # mlx=() 00:12:05.348 13:53:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:05.348 13:53:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.348 13:53:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:05.348 13:53:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:05.348 13:53:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:05.348 13:53:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:05.348 13:53:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:05.348 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:05.348 13:53:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:05.348 13:53:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:05.348 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:05.348 13:53:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:05.348 13:53:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:05.348 13:53:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.348 13:53:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:05.348 13:53:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.348 13:53:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:05.348 Found net devices under 0000:86:00.0: cvl_0_0 00:12:05.348 13:53:56 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:05.348 13:53:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:05.348 13:53:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.348 13:53:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:05.348 13:53:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.348 13:53:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:05.348 Found net devices under 0000:86:00.1: cvl_0_1 00:12:05.348 13:53:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.348 13:53:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:05.348 13:53:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:05.348 13:53:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:05.348 13:53:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:05.348 13:53:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.348 13:53:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.348 13:53:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.348 13:53:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:05.348 13:53:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.348 13:53:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.348 13:53:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:05.348 13:53:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.348 13:53:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.348 13:53:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:05.348 13:53:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:05.348 13:53:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.348 13:53:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.348 13:53:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.348 13:53:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.348 13:53:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:05.348 13:53:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.348 13:53:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.348 13:53:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.348 13:53:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:05.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:12:05.608 00:12:05.608 --- 10.0.0.2 ping statistics --- 00:12:05.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.608 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:12:05.608 13:53:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:12:05.608 00:12:05.608 --- 10.0.0.1 ping statistics --- 00:12:05.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.608 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:12:05.608 13:53:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.608 13:53:56 -- nvmf/common.sh@410 -- # return 0 00:12:05.608 13:53:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:05.608 13:53:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.608 13:53:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:05.608 13:53:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:05.608 13:53:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.608 13:53:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:05.608 13:53:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:05.608 13:53:56 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:05.608 13:53:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:05.608 13:53:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:05.608 13:53:56 -- common/autotest_common.sh@10 -- # set +x 00:12:05.608 13:53:56 -- nvmf/common.sh@469 -- # nvmfpid=3173081 00:12:05.608 13:53:56 -- nvmf/common.sh@470 -- # waitforlisten 3173081 00:12:05.608 13:53:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.608 13:53:56 -- common/autotest_common.sh@819 -- # '[' -z 3173081 ']' 00:12:05.608 13:53:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.608 13:53:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:05.608 13:53:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.608 13:53:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:05.608 13:53:56 -- common/autotest_common.sh@10 -- # set +x 00:12:05.608 [2024-07-23 13:53:56.451669] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:05.608 [2024-07-23 13:53:56.451709] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.608 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.608 [2024-07-23 13:53:56.509024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.608 [2024-07-23 13:53:56.579649] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:05.608 [2024-07-23 13:53:56.579759] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.608 [2024-07-23 13:53:56.579766] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.608 [2024-07-23 13:53:56.579772] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
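
nvmfappstart, as traced here, launches nvmf_tgt inside the target namespace and blocks until the RPC socket answers. A simplified stand-in for the real waitforlisten helper in autotest_common.sh (the polling loop below is an assumption, not the actual implementation; paths abbreviated from the log):

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app responds (real helper: waitforlisten).
for ((i = 0; i < 100; i++)); do
    if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    sleep 0.1
done
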
00:12:05.608 [2024-07-23 13:53:56.579863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.608 [2024-07-23 13:53:56.580006] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.608 [2024-07-23 13:53:56.580084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.608 [2024-07-23 13:53:56.580089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.545 13:53:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:06.545 13:53:57 -- common/autotest_common.sh@852 -- # return 0 00:12:06.545 13:53:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:06.545 13:53:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:06.545 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.545 13:53:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.545 13:53:57 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:06.545 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.545 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.545 13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.545 13:53:57 -- target/rpc.sh@26 -- # stats='{ 00:12:06.545 "tick_rate": 2300000000, 00:12:06.545 "poll_groups": [ 00:12:06.545 { 00:12:06.545 "name": "nvmf_tgt_poll_group_0", 00:12:06.545 "admin_qpairs": 0, 00:12:06.546 "io_qpairs": 0, 00:12:06.546 "current_admin_qpairs": 0, 00:12:06.546 "current_io_qpairs": 0, 00:12:06.546 "pending_bdev_io": 0, 00:12:06.546 "completed_nvme_io": 0, 00:12:06.546 "transports": [] 00:12:06.546 }, 00:12:06.546 { 00:12:06.546 "name": "nvmf_tgt_poll_group_1", 00:12:06.546 "admin_qpairs": 0, 00:12:06.546 "io_qpairs": 0, 00:12:06.546 "current_admin_qpairs": 0, 00:12:06.546 "current_io_qpairs": 0, 00:12:06.546 "pending_bdev_io": 0, 00:12:06.546 "completed_nvme_io": 0, 00:12:06.546 "transports": [] 00:12:06.546 }, 00:12:06.546 { 00:12:06.546 "name": "nvmf_tgt_poll_group_2", 00:12:06.546 "admin_qpairs": 0, 00:12:06.546 "io_qpairs": 0, 00:12:06.546 "current_admin_qpairs": 0, 00:12:06.546 "current_io_qpairs": 0, 00:12:06.546 "pending_bdev_io": 0, 00:12:06.546 "completed_nvme_io": 0, 00:12:06.546 "transports": [] 00:12:06.546 }, 00:12:06.546 { 00:12:06.546 "name": "nvmf_tgt_poll_group_3", 00:12:06.546 "admin_qpairs": 0, 00:12:06.546 "io_qpairs": 0, 00:12:06.546 "current_admin_qpairs": 0, 00:12:06.546 "current_io_qpairs": 0, 00:12:06.546 "pending_bdev_io": 0, 00:12:06.546 "completed_nvme_io": 0, 00:12:06.546 "transports": [] 00:12:06.546 } 00:12:06.546 ] 00:12:06.546 }' 00:12:06.546 13:53:57 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:06.546 13:53:57 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:06.546 13:53:57 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:06.546 13:53:57 -- target/rpc.sh@15 -- # wc -l 00:12:06.546 13:53:57 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:06.546 13:53:57 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:06.546 13:53:57 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:06.546 13:53:57 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.546 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.546 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.546 [2024-07-23 13:53:57.400731] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.546 13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.546 13:53:57 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:06.546 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.546 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.546 13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.546 13:53:57 -- target/rpc.sh@33 -- # stats='{ 00:12:06.546 "tick_rate": 2300000000, 00:12:06.546 "poll_groups": [ 00:12:06.546 { 00:12:06.546 "name": "nvmf_tgt_poll_group_0", 00:12:06.546 "admin_qpairs": 0, 00:12:06.546 "io_qpairs": 0, 00:12:06.546 "current_admin_qpairs": 0, 00:12:06.546 "current_io_qpairs": 0, 00:12:06.546 "pending_bdev_io": 0, 00:12:06.546 "completed_nvme_io": 0, 00:12:06.546 "transports": [ 00:12:06.546 { 00:12:06.546 "trtype": "TCP" 00:12:06.546 } 00:12:06.546 ] 00:12:06.546 }, 00:12:06.546 { 00:12:06.546 "name": "nvmf_tgt_poll_group_1", 00:12:06.546 "admin_qpairs": 0, 00:12:06.546 "io_qpairs": 0, 00:12:06.546 "current_admin_qpairs": 0, 00:12:06.546 "current_io_qpairs": 0, 00:12:06.546 "pending_bdev_io": 0, 00:12:06.546 "completed_nvme_io": 0, 00:12:06.546 "transports": [ 00:12:06.546 { 00:12:06.546 "trtype": "TCP" 00:12:06.546 } 00:12:06.546 ] 00:12:06.546 }, 00:12:06.546 { 00:12:06.546 "name": "nvmf_tgt_poll_group_2", 00:12:06.546 "admin_qpairs": 0, 00:12:06.546 "io_qpairs": 0, 00:12:06.546 "current_admin_qpairs": 0, 00:12:06.546 "current_io_qpairs": 0, 00:12:06.546 "pending_bdev_io": 0, 00:12:06.546 "completed_nvme_io": 0, 00:12:06.546 "transports": [ 00:12:06.546 { 00:12:06.546 "trtype": "TCP" 00:12:06.546 } 00:12:06.546 ] 00:12:06.546 }, 00:12:06.546 { 00:12:06.546 "name": "nvmf_tgt_poll_group_3", 00:12:06.546 "admin_qpairs": 0, 00:12:06.546 "io_qpairs": 0, 00:12:06.546 "current_admin_qpairs": 0, 00:12:06.546 "current_io_qpairs": 0, 00:12:06.546 "pending_bdev_io": 0, 00:12:06.546 "completed_nvme_io": 0, 00:12:06.546 "transports": [ 00:12:06.546 { 00:12:06.546 "trtype": "TCP" 00:12:06.546 } 00:12:06.546 ] 00:12:06.546 } 00:12:06.546 ] 00:12:06.546 }' 00:12:06.546 13:53:57 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:06.546 13:53:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:06.546 13:53:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:06.546 13:53:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:06.546 13:53:57 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:06.546 13:53:57 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:06.546 13:53:57 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:06.546 13:53:57 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:06.546 13:53:57 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:06.546 13:53:57 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:06.546 13:53:57 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:06.546 13:53:57 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:06.546 13:53:57 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:06.546 13:53:57 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:06.546 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.546 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.546 Malloc1 00:12:06.546 13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.546 13:53:57 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.546 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.546 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.546 
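
The jcount/jsum assertions above are small jq wrappers over a captured nvmf_get_stats snapshot. A sketch reconstructed from the trace; feeding the helpers from the $stats variable captured at rpc.sh@26 and @33 is an assumption about the plumbing inside rpc.sh:

stats=$(rpc_cmd nvmf_get_stats)   # snapshot, as at rpc.sh@26 / @33

jcount() {   # count the values a jq filter yields
    local filter=$1
    jq "$filter" <<< "$stats" | wc -l
}

jsum() {     # sum the numeric values a jq filter yields
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
}

(( $(jcount '.poll_groups[].name') == 4 ))       # one poll group per core in -m 0xF
(( $(jsum '.poll_groups[].io_qpairs') == 0 ))    # no I/O qpairs before any connect
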
13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.546 13:53:57 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.546 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.546 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.546 13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.546 13:53:57 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:06.546 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.546 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.805 13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.805 13:53:57 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.805 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.805 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.805 [2024-07-23 13:53:57.572765] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.805 13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.805 13:53:57 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:06.805 13:53:57 -- common/autotest_common.sh@640 -- # local es=0 00:12:06.806 13:53:57 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:06.806 13:53:57 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:06.806 13:53:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:06.806 13:53:57 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:06.806 13:53:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:06.806 13:53:57 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:06.806 13:53:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:06.806 13:53:57 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:06.806 13:53:57 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:06.806 13:53:57 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:06.806 [2024-07-23 13:53:57.597267] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:06.806 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:06.806 could not add new controller: failed to write to nvme-fabrics device 00:12:06.806 13:53:57 -- common/autotest_common.sh@643 -- # es=1 00:12:06.806 13:53:57 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:06.806 13:53:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:06.806 13:53:57 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:12:06.806 13:53:57 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:06.806 13:53:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:06.806 13:53:57 -- common/autotest_common.sh@10 -- # set +x 00:12:06.806 13:53:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:06.806 13:53:57 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:07.803 13:53:58 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:07.803 13:53:58 -- common/autotest_common.sh@1177 -- # local i=0 00:12:07.803 13:53:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:07.803 13:53:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:07.803 13:53:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:10.338 13:54:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:10.338 13:54:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:10.338 13:54:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:10.338 13:54:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:10.338 13:54:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:10.338 13:54:00 -- common/autotest_common.sh@1187 -- # return 0 00:12:10.338 13:54:00 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:10.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.338 13:54:00 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:10.338 13:54:00 -- common/autotest_common.sh@1198 -- # local i=0 00:12:10.338 13:54:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:10.338 13:54:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.338 13:54:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:10.338 13:54:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:10.338 13:54:00 -- common/autotest_common.sh@1210 -- # return 0 00:12:10.338 13:54:00 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:10.338 13:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.338 13:54:00 -- common/autotest_common.sh@10 -- # set +x 00:12:10.338 13:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.338 13:54:00 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.338 13:54:00 -- common/autotest_common.sh@640 -- # local es=0 00:12:10.338 13:54:00 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.338 13:54:00 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:10.338 13:54:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:10.338 13:54:00 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:10.338 13:54:00 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:10.338 13:54:00 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:10.338 13:54:00 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:10.338 13:54:00 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:10.338 13:54:00 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:10.338 13:54:00 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:10.338 [2024-07-23 13:54:00.930668] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:10.338 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:10.338 could not add new controller: failed to write to nvme-fabrics device 00:12:10.339 13:54:00 -- common/autotest_common.sh@643 -- # es=1 00:12:10.339 13:54:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:10.339 13:54:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:10.339 13:54:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:10.339 13:54:00 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:10.339 13:54:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:10.339 13:54:00 -- common/autotest_common.sh@10 -- # set +x 00:12:10.339 13:54:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:10.339 13:54:00 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:11.276 13:54:02 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:11.276 13:54:02 -- common/autotest_common.sh@1177 -- # local i=0 00:12:11.276 13:54:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:11.276 13:54:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:11.276 13:54:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:13.180 13:54:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:13.180 13:54:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:13.180 13:54:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:13.180 13:54:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:13.180 13:54:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:13.180 13:54:04 -- common/autotest_common.sh@1187 -- # return 0 00:12:13.180 13:54:04 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.180 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.180 13:54:04 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.180 13:54:04 -- common/autotest_common.sh@1198 -- # local i=0 00:12:13.180 13:54:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:13.180 13:54:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.439 13:54:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:13.439 13:54:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.439 13:54:04 -- common/autotest_common.sh@1210 -- # return 0 00:12:13.439 13:54:04 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:13.439 13:54:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.439 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:12:13.439 13:54:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.439 13:54:04 -- target/rpc.sh@81 -- # seq 1 5 00:12:13.439 13:54:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:13.439 13:54:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:13.439 13:54:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.439 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:12:13.439 13:54:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.439 13:54:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.439 13:54:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.439 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:12:13.439 [2024-07-23 13:54:04.243902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.439 13:54:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.439 13:54:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:13.439 13:54:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.439 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:12:13.439 13:54:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.439 13:54:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:13.439 13:54:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:13.439 13:54:04 -- common/autotest_common.sh@10 -- # set +x 00:12:13.439 13:54:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:13.439 13:54:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:14.377 13:54:05 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:14.377 13:54:05 -- common/autotest_common.sh@1177 -- # local i=0 00:12:14.377 13:54:05 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:14.377 13:54:05 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:14.377 13:54:05 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:16.913 13:54:07 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:16.913 13:54:07 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:16.913 13:54:07 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:16.913 13:54:07 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:16.913 13:54:07 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:16.913 13:54:07 -- common/autotest_common.sh@1187 -- # return 0 00:12:16.913 13:54:07 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:16.913 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.913 13:54:07 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:16.913 13:54:07 -- common/autotest_common.sh@1198 -- # local i=0 00:12:16.913 13:54:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:16.913 13:54:07 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
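
The connect/reject cycle traced above is the subsystem host-ACL check: with allow_any_host disabled, a connect attempt is refused until the host NQN is whitelisted. Condensed from the trace (rpc_cmd, NVME_HOST, and NVME_HOSTNQN come from the test harness):

rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # enforce the host ACL
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Rejected: "Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn...'"
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 || true

# Whitelist this host's NQN; the same connect now succeeds.
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$NVME_HOSTNQN"
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
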
00:12:16.913 13:54:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:16.913 13:54:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:16.913 13:54:07 -- common/autotest_common.sh@1210 -- # return 0 00:12:16.913 13:54:07 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:16.913 13:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.913 13:54:07 -- common/autotest_common.sh@10 -- # set +x 00:12:16.913 13:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.913 13:54:07 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:16.913 13:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.913 13:54:07 -- common/autotest_common.sh@10 -- # set +x 00:12:16.913 13:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.913 13:54:07 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:16.913 13:54:07 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:16.913 13:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.913 13:54:07 -- common/autotest_common.sh@10 -- # set +x 00:12:16.913 13:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.913 13:54:07 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.913 13:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.913 13:54:07 -- common/autotest_common.sh@10 -- # set +x 00:12:16.913 [2024-07-23 13:54:07.519091] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.913 13:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.913 13:54:07 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:16.913 13:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.913 13:54:07 -- common/autotest_common.sh@10 -- # set +x 00:12:16.913 13:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.913 13:54:07 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:16.913 13:54:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:16.913 13:54:07 -- common/autotest_common.sh@10 -- # set +x 00:12:16.913 13:54:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:16.913 13:54:07 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.851 13:54:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.851 13:54:08 -- common/autotest_common.sh@1177 -- # local i=0 00:12:17.851 13:54:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.851 13:54:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:17.851 13:54:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:19.756 13:54:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:19.756 13:54:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:19.756 13:54:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.756 13:54:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:19.756 13:54:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.756 13:54:10 -- 
common/autotest_common.sh@1187 -- # return 0 00:12:19.756 13:54:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.756 13:54:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.756 13:54:10 -- common/autotest_common.sh@1198 -- # local i=0 00:12:19.756 13:54:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:19.756 13:54:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.756 13:54:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:19.756 13:54:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.756 13:54:10 -- common/autotest_common.sh@1210 -- # return 0 00:12:19.756 13:54:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:19.756 13:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.756 13:54:10 -- common/autotest_common.sh@10 -- # set +x 00:12:19.756 13:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:19.756 13:54:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.756 13:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:19.756 13:54:10 -- common/autotest_common.sh@10 -- # set +x 00:12:20.016 13:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.016 13:54:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.016 13:54:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.016 13:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.016 13:54:10 -- common/autotest_common.sh@10 -- # set +x 00:12:20.016 13:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.016 13:54:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.016 13:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.016 13:54:10 -- common/autotest_common.sh@10 -- # set +x 00:12:20.016 [2024-07-23 13:54:10.793577] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.016 13:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.016 13:54:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.016 13:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.016 13:54:10 -- common/autotest_common.sh@10 -- # set +x 00:12:20.016 13:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.016 13:54:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.016 13:54:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:20.016 13:54:10 -- common/autotest_common.sh@10 -- # set +x 00:12:20.016 13:54:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:20.016 13:54:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:20.952 13:54:11 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:20.952 13:54:11 -- common/autotest_common.sh@1177 -- # local i=0 00:12:20.952 13:54:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:20.952 13:54:11 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:12:20.952 13:54:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:23.487 13:54:13 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:23.487 13:54:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:23.487 13:54:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.487 13:54:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:23.487 13:54:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.487 13:54:13 -- common/autotest_common.sh@1187 -- # return 0 00:12:23.487 13:54:13 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.487 13:54:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.487 13:54:13 -- common/autotest_common.sh@1198 -- # local i=0 00:12:23.487 13:54:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:23.487 13:54:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.487 13:54:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:23.487 13:54:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.487 13:54:13 -- common/autotest_common.sh@1210 -- # return 0 00:12:23.487 13:54:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.487 13:54:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.487 13:54:13 -- common/autotest_common.sh@10 -- # set +x 00:12:23.487 13:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.487 13:54:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.487 13:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.487 13:54:14 -- common/autotest_common.sh@10 -- # set +x 00:12:23.487 13:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.487 13:54:14 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.487 13:54:14 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.487 13:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.487 13:54:14 -- common/autotest_common.sh@10 -- # set +x 00:12:23.487 13:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.487 13:54:14 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.487 13:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.487 13:54:14 -- common/autotest_common.sh@10 -- # set +x 00:12:23.487 [2024-07-23 13:54:14.025556] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.487 13:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.487 13:54:14 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.487 13:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.487 13:54:14 -- common/autotest_common.sh@10 -- # set +x 00:12:23.487 13:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.487 13:54:14 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.487 13:54:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:23.487 13:54:14 -- common/autotest_common.sh@10 -- # set +x 00:12:23.487 13:54:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:23.487 
13:54:14 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.425 13:54:15 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.425 13:54:15 -- common/autotest_common.sh@1177 -- # local i=0 00:12:24.425 13:54:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.425 13:54:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:24.425 13:54:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:26.331 13:54:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:26.332 13:54:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:26.332 13:54:17 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.332 13:54:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:26.332 13:54:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.332 13:54:17 -- common/autotest_common.sh@1187 -- # return 0 00:12:26.332 13:54:17 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.332 13:54:17 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.332 13:54:17 -- common/autotest_common.sh@1198 -- # local i=0 00:12:26.332 13:54:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:26.332 13:54:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.332 13:54:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:26.332 13:54:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.332 13:54:17 -- common/autotest_common.sh@1210 -- # return 0 00:12:26.332 13:54:17 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.332 13:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.332 13:54:17 -- common/autotest_common.sh@10 -- # set +x 00:12:26.332 13:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.332 13:54:17 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.332 13:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.332 13:54:17 -- common/autotest_common.sh@10 -- # set +x 00:12:26.332 13:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.332 13:54:17 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.332 13:54:17 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.332 13:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.332 13:54:17 -- common/autotest_common.sh@10 -- # set +x 00:12:26.332 13:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.332 13:54:17 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.332 13:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.332 13:54:17 -- common/autotest_common.sh@10 -- # set +x 00:12:26.332 [2024-07-23 13:54:17.295595] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.332 13:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.332 13:54:17 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.332 
13:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.332 13:54:17 -- common/autotest_common.sh@10 -- # set +x 00:12:26.332 13:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.332 13:54:17 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.332 13:54:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.332 13:54:17 -- common/autotest_common.sh@10 -- # set +x 00:12:26.332 13:54:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.332 13:54:17 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.757 13:54:18 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.757 13:54:18 -- common/autotest_common.sh@1177 -- # local i=0 00:12:27.757 13:54:18 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.757 13:54:18 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:27.757 13:54:18 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:29.672 13:54:20 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:29.672 13:54:20 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:29.672 13:54:20 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.672 13:54:20 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:29.672 13:54:20 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.672 13:54:20 -- common/autotest_common.sh@1187 -- # return 0 00:12:29.672 13:54:20 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.672 13:54:20 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.672 13:54:20 -- common/autotest_common.sh@1198 -- # local i=0 00:12:29.672 13:54:20 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:29.672 13:54:20 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.672 13:54:20 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:29.672 13:54:20 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.672 13:54:20 -- common/autotest_common.sh@1210 -- # return 0 00:12:29.672 13:54:20 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 -- target/rpc.sh@99 -- # seq 1 5 00:12:29.672 13:54:20 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.672 13:54:20 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 [2024-07-23 13:54:20.618449] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.672 13:54:20 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.672 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.672 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.672 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.672 13:54:20 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.673 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.673 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.673 [2024-07-23 13:54:20.666565] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.673 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.673 13:54:20 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.673 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.673 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.673 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.673 13:54:20 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.673 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.673 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.673 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.673 13:54:20 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.673 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.673 13:54:20 -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.932 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.932 13:54:20 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.932 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.932 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.932 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.932 13:54:20 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.932 13:54:20 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.932 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.932 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.932 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.932 13:54:20 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.932 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.932 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.932 [2024-07-23 13:54:20.714705] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.932 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.932 13:54:20 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.932 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.932 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.932 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.932 13:54:20 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.932 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.932 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.932 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.932 13:54:20 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.932 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.932 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.932 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.932 13:54:20 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.932 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.932 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.932 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.932 13:54:20 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.933 13:54:20 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 [2024-07-23 13:54:20.766901] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 
13:54:20 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:29.933 13:54:20 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 [2024-07-23 13:54:20.815076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
00:12:29.933 13:54:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:29.933 13:54:20 -- common/autotest_common.sh@10 -- # set +x 00:12:29.933 13:54:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:29.933 13:54:20 -- target/rpc.sh@110 -- # stats='{ 00:12:29.933 "tick_rate": 2300000000, 00:12:29.933 "poll_groups": [ 00:12:29.933 { 00:12:29.933 "name": "nvmf_tgt_poll_group_0", 00:12:29.933 "admin_qpairs": 2, 00:12:29.933 "io_qpairs": 168, 00:12:29.933 "current_admin_qpairs": 0, 00:12:29.933 "current_io_qpairs": 0, 00:12:29.933 "pending_bdev_io": 0, 00:12:29.933 "completed_nvme_io": 269, 00:12:29.933 "transports": [ 00:12:29.933 { 00:12:29.933 "trtype": "TCP" 00:12:29.933 } 00:12:29.933 ] 00:12:29.933 }, 00:12:29.933 { 00:12:29.933 "name": "nvmf_tgt_poll_group_1", 00:12:29.933 "admin_qpairs": 2, 00:12:29.933 "io_qpairs": 168, 00:12:29.933 "current_admin_qpairs": 0, 00:12:29.933 "current_io_qpairs": 0, 00:12:29.933 "pending_bdev_io": 0, 00:12:29.933 "completed_nvme_io": 317, 00:12:29.933 "transports": [ 00:12:29.933 { 00:12:29.933 "trtype": "TCP" 00:12:29.933 } 00:12:29.933 ] 00:12:29.933 }, 00:12:29.933 { 00:12:29.933 "name": "nvmf_tgt_poll_group_2", 00:12:29.933 "admin_qpairs": 1, 00:12:29.933 "io_qpairs": 168, 00:12:29.933 "current_admin_qpairs": 0, 00:12:29.933 "current_io_qpairs": 0, 00:12:29.933 "pending_bdev_io": 0, 00:12:29.933 "completed_nvme_io": 218, 00:12:29.933 "transports": [ 00:12:29.933 { 00:12:29.933 "trtype": "TCP" 00:12:29.933 } 00:12:29.933 ] 00:12:29.933 }, 00:12:29.933 { 00:12:29.933 "name": "nvmf_tgt_poll_group_3", 00:12:29.933 "admin_qpairs": 2, 00:12:29.933 "io_qpairs": 168, 00:12:29.933 "current_admin_qpairs": 0, 00:12:29.933 "current_io_qpairs": 0, 00:12:29.933 "pending_bdev_io": 0, 00:12:29.933 "completed_nvme_io": 218, 00:12:29.933 "transports": [ 00:12:29.933 { 00:12:29.933 "trtype": "TCP" 00:12:29.933 } 00:12:29.933 ] 00:12:29.933 } 00:12:29.933 ] 00:12:29.933 }' 00:12:29.933 13:54:20 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:29.933 13:54:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:29.933 13:54:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:29.933 13:54:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:29.933 13:54:20 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:29.933 13:54:20 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:29.933 13:54:20 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:29.933 13:54:20 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:29.933 13:54:20 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:30.193 13:54:20 -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:30.194 13:54:20 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:30.194 13:54:20 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:30.194 13:54:20 -- target/rpc.sh@123 -- # nvmftestfini 00:12:30.194 13:54:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:30.194 13:54:20 -- nvmf/common.sh@116 -- # sync 00:12:30.194 13:54:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:30.194 13:54:20 -- nvmf/common.sh@119 -- # set +e 00:12:30.194 13:54:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:30.194 13:54:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:30.194 rmmod nvme_tcp 00:12:30.194 rmmod nvme_fabrics 00:12:30.194 rmmod nvme_keyring 00:12:30.194 13:54:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:30.194 13:54:21 -- nvmf/common.sh@123 -- # set -e 00:12:30.194 13:54:21 -- 
nvmf/common.sh@124 -- # return 0 00:12:30.194 13:54:21 -- nvmf/common.sh@477 -- # '[' -n 3173081 ']' 00:12:30.194 13:54:21 -- nvmf/common.sh@478 -- # killprocess 3173081 00:12:30.194 13:54:21 -- common/autotest_common.sh@926 -- # '[' -z 3173081 ']' 00:12:30.194 13:54:21 -- common/autotest_common.sh@930 -- # kill -0 3173081 00:12:30.194 13:54:21 -- common/autotest_common.sh@931 -- # uname 00:12:30.194 13:54:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:30.194 13:54:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3173081 00:12:30.194 13:54:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:30.194 13:54:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:30.194 13:54:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3173081' 00:12:30.194 killing process with pid 3173081 00:12:30.194 13:54:21 -- common/autotest_common.sh@945 -- # kill 3173081 00:12:30.194 13:54:21 -- common/autotest_common.sh@950 -- # wait 3173081 00:12:30.454 13:54:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:30.454 13:54:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:30.454 13:54:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:30.454 13:54:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:30.454 13:54:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:30.454 13:54:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.454 13:54:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.454 13:54:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.362 13:54:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:32.362 00:12:32.362 real 0m32.720s 00:12:32.362 user 1m40.028s 00:12:32.362 sys 0m5.772s 00:12:32.362 13:54:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:32.362 13:54:23 -- common/autotest_common.sh@10 -- # set +x 00:12:32.362 ************************************ 00:12:32.362 END TEST nvmf_rpc 00:12:32.362 ************************************ 00:12:32.621 13:54:23 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:32.621 13:54:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:32.621 13:54:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:32.621 13:54:23 -- common/autotest_common.sh@10 -- # set +x 00:12:32.621 ************************************ 00:12:32.621 START TEST nvmf_invalid 00:12:32.621 ************************************ 00:12:32.621 13:54:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:32.621 * Looking for test storage... 
00:12:32.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:32.621 13:54:23 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.621 13:54:23 -- nvmf/common.sh@7 -- # uname -s 00:12:32.621 13:54:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.621 13:54:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.621 13:54:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.621 13:54:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.621 13:54:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.621 13:54:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.621 13:54:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.621 13:54:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.621 13:54:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.621 13:54:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.621 13:54:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.621 13:54:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:32.621 13:54:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.621 13:54:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.621 13:54:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.621 13:54:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:32.621 13:54:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.621 13:54:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.621 13:54:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.621 13:54:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.621 13:54:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.621 13:54:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.621 13:54:23 -- paths/export.sh@5 -- # export PATH 00:12:32.621 13:54:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.621 13:54:23 -- nvmf/common.sh@46 -- # : 0 00:12:32.621 13:54:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:32.621 13:54:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:32.621 13:54:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:32.621 13:54:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.621 13:54:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.621 13:54:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:32.621 13:54:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:32.621 13:54:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:32.621 13:54:23 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:32.622 13:54:23 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:32.622 13:54:23 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:32.622 13:54:23 -- target/invalid.sh@14 -- # target=foobar 00:12:32.622 13:54:23 -- target/invalid.sh@16 -- # RANDOM=0 00:12:32.622 13:54:23 -- target/invalid.sh@34 -- # nvmftestinit 00:12:32.622 13:54:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:32.622 13:54:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.622 13:54:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:32.622 13:54:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:32.622 13:54:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:32.622 13:54:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.622 13:54:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.622 13:54:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.622 13:54:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:32.622 13:54:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:32.622 13:54:23 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:32.622 13:54:23 -- common/autotest_common.sh@10 -- # set +x 00:12:37.895 13:54:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:37.895 13:54:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:37.895 13:54:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:37.895 13:54:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:37.895 13:54:28 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:37.895 13:54:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:37.895 13:54:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:37.895 13:54:28 -- nvmf/common.sh@294 -- # net_devs=() 00:12:37.895 13:54:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:37.895 13:54:28 -- nvmf/common.sh@295 -- # e810=() 00:12:37.895 13:54:28 -- nvmf/common.sh@295 -- # local -ga e810 00:12:37.895 13:54:28 -- nvmf/common.sh@296 -- # x722=() 00:12:37.895 13:54:28 -- nvmf/common.sh@296 -- # local -ga x722 00:12:37.895 13:54:28 -- nvmf/common.sh@297 -- # mlx=() 00:12:37.895 13:54:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:37.895 13:54:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.895 13:54:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:37.895 13:54:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:37.895 13:54:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:37.895 13:54:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:37.895 13:54:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:37.895 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:37.895 13:54:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:37.895 13:54:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:37.895 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:37.895 13:54:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:37.895 13:54:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:37.895 13:54:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:37.895 
13:54:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.895 13:54:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:37.895 13:54:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.895 13:54:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:37.895 Found net devices under 0000:86:00.0: cvl_0_0 00:12:37.895 13:54:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.895 13:54:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:37.896 13:54:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.896 13:54:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:37.896 13:54:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.896 13:54:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:37.896 Found net devices under 0000:86:00.1: cvl_0_1 00:12:37.896 13:54:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.896 13:54:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:37.896 13:54:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:37.896 13:54:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:37.896 13:54:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:37.896 13:54:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:37.896 13:54:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.896 13:54:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:37.896 13:54:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:37.896 13:54:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:37.896 13:54:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:37.896 13:54:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:37.896 13:54:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:37.896 13:54:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:37.896 13:54:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.896 13:54:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:37.896 13:54:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:37.896 13:54:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:37.896 13:54:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:37.896 13:54:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:37.896 13:54:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:37.896 13:54:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:37.896 13:54:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:37.896 13:54:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:37.896 13:54:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:37.896 13:54:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:37.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:12:37.896 00:12:37.896 --- 10.0.0.2 ping statistics --- 00:12:37.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.896 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:37.896 13:54:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:37.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.303 ms 00:12:37.896 00:12:37.896 --- 10.0.0.1 ping statistics --- 00:12:37.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.896 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:12:37.896 13:54:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.896 13:54:28 -- nvmf/common.sh@410 -- # return 0 00:12:37.896 13:54:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:37.896 13:54:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.896 13:54:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:37.896 13:54:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:37.896 13:54:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.896 13:54:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:37.896 13:54:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:37.896 13:54:28 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:37.896 13:54:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:37.896 13:54:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:37.896 13:54:28 -- common/autotest_common.sh@10 -- # set +x 00:12:37.896 13:54:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:37.896 13:54:28 -- nvmf/common.sh@469 -- # nvmfpid=3181273 00:12:37.896 13:54:28 -- nvmf/common.sh@470 -- # waitforlisten 3181273 00:12:37.896 13:54:28 -- common/autotest_common.sh@819 -- # '[' -z 3181273 ']' 00:12:37.896 13:54:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.896 13:54:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:37.896 13:54:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:37.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.896 13:54:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:37.896 13:54:28 -- common/autotest_common.sh@10 -- # set +x 00:12:37.896 [2024-07-23 13:54:28.755547] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:37.896 [2024-07-23 13:54:28.755590] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.896 EAL: No free 2048 kB hugepages reported on node 1 00:12:37.896 [2024-07-23 13:54:28.812699] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:37.896 [2024-07-23 13:54:28.891060] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:37.896 [2024-07-23 13:54:28.891184] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:37.896 [2024-07-23 13:54:28.891192] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:37.896 [2024-07-23 13:54:28.891198] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:37.896 [2024-07-23 13:54:28.891242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:37.896 [2024-07-23 13:54:28.891342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:37.896 [2024-07-23 13:54:28.891408] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:37.896 [2024-07-23 13:54:28.891409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.833 13:54:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:38.833 13:54:29 -- common/autotest_common.sh@852 -- # return 0 00:12:38.833 13:54:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:38.833 13:54:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:38.833 13:54:29 -- common/autotest_common.sh@10 -- # set +x 00:12:38.833 13:54:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:38.833 13:54:29 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:38.833 13:54:29 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2600 00:12:38.833 [2024-07-23 13:54:29.769847] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:38.833 13:54:29 -- target/invalid.sh@40 -- # out='request: 00:12:38.833 { 00:12:38.833 "nqn": "nqn.2016-06.io.spdk:cnode2600", 00:12:38.833 "tgt_name": "foobar", 00:12:38.833 "method": "nvmf_create_subsystem", 00:12:38.833 "req_id": 1 00:12:38.833 } 00:12:38.833 Got JSON-RPC error response 00:12:38.833 response: 00:12:38.833 { 00:12:38.833 "code": -32603, 00:12:38.833 "message": "Unable to find target foobar" 00:12:38.833 }' 00:12:38.833 13:54:29 -- target/invalid.sh@41 -- # [[ request: 00:12:38.833 { 00:12:38.833 "nqn": "nqn.2016-06.io.spdk:cnode2600", 00:12:38.833 "tgt_name": "foobar", 00:12:38.833 "method": "nvmf_create_subsystem", 00:12:38.833 "req_id": 1 00:12:38.833 } 00:12:38.833 Got JSON-RPC error response 00:12:38.833 response: 00:12:38.833 { 00:12:38.833 "code": -32603, 00:12:38.833 "message": "Unable to find target foobar" 00:12:38.833 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:38.833 13:54:29 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:38.833 13:54:29 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode15983 00:12:39.092 [2024-07-23 13:54:29.958524] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15983: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:39.092 13:54:29 -- target/invalid.sh@45 -- # out='request: 00:12:39.092 { 00:12:39.092 "nqn": "nqn.2016-06.io.spdk:cnode15983", 00:12:39.092 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:39.092 "method": "nvmf_create_subsystem", 00:12:39.092 "req_id": 1 00:12:39.092 } 00:12:39.092 Got JSON-RPC error response 00:12:39.092 response: 00:12:39.092 { 00:12:39.092 "code": -32602, 00:12:39.092 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:39.092 }' 00:12:39.092 13:54:29 -- target/invalid.sh@46 -- # [[ request: 00:12:39.092 { 00:12:39.092 "nqn": "nqn.2016-06.io.spdk:cnode15983", 00:12:39.092 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:39.092 "method": "nvmf_create_subsystem", 00:12:39.092 "req_id": 1 00:12:39.092 } 00:12:39.092 Got JSON-RPC error response 00:12:39.092 response: 00:12:39.092 { 
00:12:39.092 "code": -32602, 00:12:39.092 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:39.092 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:39.092 13:54:29 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:39.092 13:54:29 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode11764 00:12:39.352 [2024-07-23 13:54:30.143127] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11764: invalid model number 'SPDK_Controller' 00:12:39.352 13:54:30 -- target/invalid.sh@50 -- # out='request: 00:12:39.352 { 00:12:39.352 "nqn": "nqn.2016-06.io.spdk:cnode11764", 00:12:39.352 "model_number": "SPDK_Controller\u001f", 00:12:39.352 "method": "nvmf_create_subsystem", 00:12:39.352 "req_id": 1 00:12:39.352 } 00:12:39.352 Got JSON-RPC error response 00:12:39.352 response: 00:12:39.352 { 00:12:39.352 "code": -32602, 00:12:39.352 "message": "Invalid MN SPDK_Controller\u001f" 00:12:39.352 }' 00:12:39.352 13:54:30 -- target/invalid.sh@51 -- # [[ request: 00:12:39.352 { 00:12:39.352 "nqn": "nqn.2016-06.io.spdk:cnode11764", 00:12:39.352 "model_number": "SPDK_Controller\u001f", 00:12:39.352 "method": "nvmf_create_subsystem", 00:12:39.352 "req_id": 1 00:12:39.352 } 00:12:39.352 Got JSON-RPC error response 00:12:39.352 response: 00:12:39.352 { 00:12:39.352 "code": -32602, 00:12:39.352 "message": "Invalid MN SPDK_Controller\u001f" 00:12:39.352 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:39.352 13:54:30 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:39.352 13:54:30 -- target/invalid.sh@19 -- # local length=21 ll 00:12:39.352 13:54:30 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:39.352 13:54:30 -- target/invalid.sh@21 -- # local chars 00:12:39.352 13:54:30 -- target/invalid.sh@22 -- # local string 00:12:39.352 13:54:30 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:39.352 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.352 13:54:30 -- target/invalid.sh@25 -- # printf %x 91 00:12:39.352 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:39.352 13:54:30 -- target/invalid.sh@25 -- # string+='[' 00:12:39.352 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 74 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=J 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 54 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=6 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 39 00:12:39.353 13:54:30 -- 
target/invalid.sh@25 -- # echo -e '\x27' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=\' 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 66 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=B 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 86 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=V 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 40 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+='(' 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 81 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=Q 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 121 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=y 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 70 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=F 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 78 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=N 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 47 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=/ 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 40 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+='(' 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 76 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=L 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 53 00:12:39.353 13:54:30 -- 
target/invalid.sh@25 -- # echo -e '\x35' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=5 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 77 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x4d' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=M 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 103 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=g 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 116 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=t 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 42 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+='*' 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 59 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=';' 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # printf %x 47 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:39.353 13:54:30 -- target/invalid.sh@25 -- # string+=/ 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.353 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.353 13:54:30 -- target/invalid.sh@28 -- # [[ [ == \- ]] 00:12:39.353 13:54:30 -- target/invalid.sh@31 -- # echo '[J6'\''BV(QyFN/(L5Mgt*;/' 00:12:39.353 13:54:30 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '[J6'\''BV(QyFN/(L5Mgt*;/' nqn.2016-06.io.spdk:cnode24115 00:12:39.613 [2024-07-23 13:54:30.456149] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24115: invalid serial number '[J6'BV(QyFN/(L5Mgt*;/' 00:12:39.613 13:54:30 -- target/invalid.sh@54 -- # out='request: 00:12:39.613 { 00:12:39.613 "nqn": "nqn.2016-06.io.spdk:cnode24115", 00:12:39.613 "serial_number": "[J6'\''BV(QyFN/(L5Mgt*;/", 00:12:39.613 "method": "nvmf_create_subsystem", 00:12:39.613 "req_id": 1 00:12:39.613 } 00:12:39.613 Got JSON-RPC error response 00:12:39.613 response: 00:12:39.613 { 00:12:39.613 "code": -32602, 00:12:39.613 "message": "Invalid SN [J6'\''BV(QyFN/(L5Mgt*;/" 00:12:39.613 }' 00:12:39.613 13:54:30 -- target/invalid.sh@55 -- # [[ request: 00:12:39.613 { 00:12:39.613 "nqn": "nqn.2016-06.io.spdk:cnode24115", 00:12:39.613 "serial_number": "[J6'BV(QyFN/(L5Mgt*;/", 00:12:39.613 "method": "nvmf_create_subsystem", 00:12:39.613 "req_id": 1 00:12:39.613 } 00:12:39.613 Got JSON-RPC error response 00:12:39.613 response: 00:12:39.613 { 00:12:39.613 "code": -32602, 
00:12:39.613 "message": "Invalid SN [J6'BV(QyFN/(L5Mgt*;/" 00:12:39.613 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:39.613 13:54:30 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:39.613 13:54:30 -- target/invalid.sh@19 -- # local length=41 ll 00:12:39.613 13:54:30 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:39.613 13:54:30 -- target/invalid.sh@21 -- # local chars 00:12:39.613 13:54:30 -- target/invalid.sh@22 -- # local string 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 84 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=T 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 96 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x60' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+='`' 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 54 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x36' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=6 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 55 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=7 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 49 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=1 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 85 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x55' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=U 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 47 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=/ 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 78 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=N 00:12:39.613 13:54:30 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 63 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+='?' 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 81 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=Q 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 89 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=Y 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 89 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=Y 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 63 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+='?' 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 46 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=. 
00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 110 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=n 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 107 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=k 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 75 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=K 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 72 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=H 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 55 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # string+=7 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.613 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.613 13:54:30 -- target/invalid.sh@25 -- # printf %x 70 00:12:39.614 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:39.614 13:54:30 -- target/invalid.sh@25 -- # string+=F 00:12:39.614 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.614 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.614 13:54:30 -- target/invalid.sh@25 -- # printf %x 89 00:12:39.614 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:39.614 13:54:30 -- target/invalid.sh@25 -- # string+=Y 00:12:39.614 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.614 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.614 13:54:30 -- target/invalid.sh@25 -- # printf %x 82 00:12:39.614 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:39.614 13:54:30 -- target/invalid.sh@25 -- # string+=R 00:12:39.614 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.614 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 101 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=e 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 95 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=_ 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 84 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=T 
00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 59 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=';' 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 42 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+='*' 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 91 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+='[' 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 37 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=% 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 102 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=f 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 35 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+='#' 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 98 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x62' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=b 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 49 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=1 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 44 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=, 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 119 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=w 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 124 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+='|' 
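The per-character xtrace above is the test's gen_random_s helper at work: walk a table of ASCII codes 32-127, pick one at random per position, convert it with printf %x / echo -e, and append it to string. A minimal condensed sketch of that same pattern, assuming plain bash; the function below is illustrative, not the script's verbatim source (it stops at code 126 to stay printable):

gen_random_s() {
    # Build a string of $1 random characters drawn from printable ASCII,
    # mirroring the one-character-per-iteration trace above.
    local length=$1 string='' char code ll
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 95 + 32 ))                 # pick a code in 32..126
        printf -v char "\\x$(printf '%x' "$code")"   # code -> character
        string+=$char
    done
    printf '%s\n' "$string"
}
gen_random_s 41    # yields strings like the 41-character model number built above

Using printf -v rather than command substitution keeps a generated space (code 32) from being stripped as trailing whitespace.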
00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 116 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=t 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 38 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+='&' 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 63 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+='?' 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 95 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=_ 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # printf %x 95 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:39.873 13:54:30 -- target/invalid.sh@25 -- # string+=_ 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:39.873 13:54:30 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:39.873 13:54:30 -- target/invalid.sh@28 -- # [[ T == \- ]] 00:12:39.873 13:54:30 -- target/invalid.sh@31 -- # echo 'T`671U/N?QYY?.nkKH7FYRe_T;*[%f#b1,w|t&?__' 00:12:39.873 13:54:30 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'T`671U/N?QYY?.nkKH7FYRe_T;*[%f#b1,w|t&?__' nqn.2016-06.io.spdk:cnode15690 00:12:40.132 [2024-07-23 13:54:30.901667] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15690: invalid model number 'T`671U/N?QYY?.nkKH7FYRe_T;*[%f#b1,w|t&?__' 00:12:40.132 13:54:30 -- target/invalid.sh@58 -- # out='request: 00:12:40.132 { 00:12:40.132 "nqn": "nqn.2016-06.io.spdk:cnode15690", 00:12:40.132 "model_number": "T`671U/N?QYY?.nkKH7FYRe_T;*[%f#b1,w|t&?__", 00:12:40.132 "method": "nvmf_create_subsystem", 00:12:40.132 "req_id": 1 00:12:40.132 } 00:12:40.132 Got JSON-RPC error response 00:12:40.132 response: 00:12:40.132 { 00:12:40.132 "code": -32602, 00:12:40.132 "message": "Invalid MN T`671U/N?QYY?.nkKH7FYRe_T;*[%f#b1,w|t&?__" 00:12:40.132 }' 00:12:40.132 13:54:30 -- target/invalid.sh@59 -- # [[ request: 00:12:40.132 { 00:12:40.132 "nqn": "nqn.2016-06.io.spdk:cnode15690", 00:12:40.132 "model_number": "T`671U/N?QYY?.nkKH7FYRe_T;*[%f#b1,w|t&?__", 00:12:40.132 "method": "nvmf_create_subsystem", 00:12:40.132 "req_id": 1 00:12:40.132 } 00:12:40.132 Got JSON-RPC error response 00:12:40.132 response: 00:12:40.132 { 00:12:40.132 "code": -32602, 00:12:40.132 "message": "Invalid MN T`671U/N?QYY?.nkKH7FYRe_T;*[%f#b1,w|t&?__" 00:12:40.132 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:40.132 13:54:30 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:40.132 [2024-07-23 13:54:31.086340] tcp.c: 
659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.132 13:54:31 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:40.390 13:54:31 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:40.390 13:54:31 -- target/invalid.sh@67 -- # echo '' 00:12:40.390 13:54:31 -- target/invalid.sh@67 -- # head -n 1 00:12:40.390 13:54:31 -- target/invalid.sh@67 -- # IP= 00:12:40.390 13:54:31 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:40.650 [2024-07-23 13:54:31.447561] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:40.650 13:54:31 -- target/invalid.sh@69 -- # out='request: 00:12:40.650 { 00:12:40.650 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:40.650 "listen_address": { 00:12:40.650 "trtype": "tcp", 00:12:40.650 "traddr": "", 00:12:40.650 "trsvcid": "4421" 00:12:40.650 }, 00:12:40.650 "method": "nvmf_subsystem_remove_listener", 00:12:40.650 "req_id": 1 00:12:40.650 } 00:12:40.650 Got JSON-RPC error response 00:12:40.650 response: 00:12:40.650 { 00:12:40.650 "code": -32602, 00:12:40.650 "message": "Invalid parameters" 00:12:40.650 }' 00:12:40.650 13:54:31 -- target/invalid.sh@70 -- # [[ request: 00:12:40.650 { 00:12:40.650 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:40.650 "listen_address": { 00:12:40.650 "trtype": "tcp", 00:12:40.650 "traddr": "", 00:12:40.650 "trsvcid": "4421" 00:12:40.650 }, 00:12:40.650 "method": "nvmf_subsystem_remove_listener", 00:12:40.650 "req_id": 1 00:12:40.650 } 00:12:40.650 Got JSON-RPC error response 00:12:40.650 response: 00:12:40.650 { 00:12:40.650 "code": -32602, 00:12:40.650 "message": "Invalid parameters" 00:12:40.650 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:40.650 13:54:31 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27339 -i 0 00:12:40.650 [2024-07-23 13:54:31.620097] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27339: invalid cntlid range [0-65519] 00:12:40.650 13:54:31 -- target/invalid.sh@73 -- # out='request: 00:12:40.650 { 00:12:40.650 "nqn": "nqn.2016-06.io.spdk:cnode27339", 00:12:40.650 "min_cntlid": 0, 00:12:40.650 "method": "nvmf_create_subsystem", 00:12:40.650 "req_id": 1 00:12:40.650 } 00:12:40.650 Got JSON-RPC error response 00:12:40.650 response: 00:12:40.650 { 00:12:40.650 "code": -32602, 00:12:40.650 "message": "Invalid cntlid range [0-65519]" 00:12:40.650 }' 00:12:40.650 13:54:31 -- target/invalid.sh@74 -- # [[ request: 00:12:40.650 { 00:12:40.650 "nqn": "nqn.2016-06.io.spdk:cnode27339", 00:12:40.650 "min_cntlid": 0, 00:12:40.650 "method": "nvmf_create_subsystem", 00:12:40.650 "req_id": 1 00:12:40.650 } 00:12:40.650 Got JSON-RPC error response 00:12:40.650 response: 00:12:40.650 { 00:12:40.650 "code": -32602, 00:12:40.650 "message": "Invalid cntlid range [0-65519]" 00:12:40.650 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:40.650 13:54:31 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2281 -i 65520 00:12:40.910 [2024-07-23 13:54:31.792695] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2281: invalid cntlid range [65520-65519] 00:12:40.910 13:54:31 -- 
target/invalid.sh@75 -- # out='request: 00:12:40.910 { 00:12:40.910 "nqn": "nqn.2016-06.io.spdk:cnode2281", 00:12:40.910 "min_cntlid": 65520, 00:12:40.910 "method": "nvmf_create_subsystem", 00:12:40.910 "req_id": 1 00:12:40.910 } 00:12:40.910 Got JSON-RPC error response 00:12:40.910 response: 00:12:40.910 { 00:12:40.910 "code": -32602, 00:12:40.910 "message": "Invalid cntlid range [65520-65519]" 00:12:40.910 }' 00:12:40.910 13:54:31 -- target/invalid.sh@76 -- # [[ request: 00:12:40.910 { 00:12:40.910 "nqn": "nqn.2016-06.io.spdk:cnode2281", 00:12:40.910 "min_cntlid": 65520, 00:12:40.910 "method": "nvmf_create_subsystem", 00:12:40.910 "req_id": 1 00:12:40.910 } 00:12:40.910 Got JSON-RPC error response 00:12:40.910 response: 00:12:40.910 { 00:12:40.910 "code": -32602, 00:12:40.910 "message": "Invalid cntlid range [65520-65519]" 00:12:40.910 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:40.910 13:54:31 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13481 -I 0 00:12:41.170 [2024-07-23 13:54:31.965300] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13481: invalid cntlid range [1-0] 00:12:41.170 13:54:31 -- target/invalid.sh@77 -- # out='request: 00:12:41.170 { 00:12:41.170 "nqn": "nqn.2016-06.io.spdk:cnode13481", 00:12:41.170 "max_cntlid": 0, 00:12:41.170 "method": "nvmf_create_subsystem", 00:12:41.170 "req_id": 1 00:12:41.170 } 00:12:41.170 Got JSON-RPC error response 00:12:41.170 response: 00:12:41.170 { 00:12:41.170 "code": -32602, 00:12:41.170 "message": "Invalid cntlid range [1-0]" 00:12:41.170 }' 00:12:41.170 13:54:31 -- target/invalid.sh@78 -- # [[ request: 00:12:41.170 { 00:12:41.170 "nqn": "nqn.2016-06.io.spdk:cnode13481", 00:12:41.170 "max_cntlid": 0, 00:12:41.170 "method": "nvmf_create_subsystem", 00:12:41.170 "req_id": 1 00:12:41.170 } 00:12:41.170 Got JSON-RPC error response 00:12:41.170 response: 00:12:41.170 { 00:12:41.170 "code": -32602, 00:12:41.170 "message": "Invalid cntlid range [1-0]" 00:12:41.170 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.170 13:54:31 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25714 -I 65520 00:12:41.170 [2024-07-23 13:54:32.145881] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25714: invalid cntlid range [1-65520] 00:12:41.170 13:54:32 -- target/invalid.sh@79 -- # out='request: 00:12:41.170 { 00:12:41.170 "nqn": "nqn.2016-06.io.spdk:cnode25714", 00:12:41.170 "max_cntlid": 65520, 00:12:41.170 "method": "nvmf_create_subsystem", 00:12:41.170 "req_id": 1 00:12:41.170 } 00:12:41.170 Got JSON-RPC error response 00:12:41.170 response: 00:12:41.170 { 00:12:41.170 "code": -32602, 00:12:41.170 "message": "Invalid cntlid range [1-65520]" 00:12:41.170 }' 00:12:41.170 13:54:32 -- target/invalid.sh@80 -- # [[ request: 00:12:41.170 { 00:12:41.170 "nqn": "nqn.2016-06.io.spdk:cnode25714", 00:12:41.170 "max_cntlid": 65520, 00:12:41.170 "method": "nvmf_create_subsystem", 00:12:41.170 "req_id": 1 00:12:41.170 } 00:12:41.170 Got JSON-RPC error response 00:12:41.170 response: 00:12:41.170 { 00:12:41.170 "code": -32602, 00:12:41.170 "message": "Invalid cntlid range [1-65520]" 00:12:41.170 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.170 13:54:32 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode12793 -i 6 -I 5 00:12:41.429 [2024-07-23 13:54:32.334555] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12793: invalid cntlid range [6-5] 00:12:41.429 13:54:32 -- target/invalid.sh@83 -- # out='request: 00:12:41.429 { 00:12:41.429 "nqn": "nqn.2016-06.io.spdk:cnode12793", 00:12:41.429 "min_cntlid": 6, 00:12:41.429 "max_cntlid": 5, 00:12:41.429 "method": "nvmf_create_subsystem", 00:12:41.429 "req_id": 1 00:12:41.429 } 00:12:41.429 Got JSON-RPC error response 00:12:41.429 response: 00:12:41.429 { 00:12:41.429 "code": -32602, 00:12:41.429 "message": "Invalid cntlid range [6-5]" 00:12:41.429 }' 00:12:41.429 13:54:32 -- target/invalid.sh@84 -- # [[ request: 00:12:41.429 { 00:12:41.429 "nqn": "nqn.2016-06.io.spdk:cnode12793", 00:12:41.429 "min_cntlid": 6, 00:12:41.429 "max_cntlid": 5, 00:12:41.429 "method": "nvmf_create_subsystem", 00:12:41.429 "req_id": 1 00:12:41.429 } 00:12:41.429 Got JSON-RPC error response 00:12:41.429 response: 00:12:41.429 { 00:12:41.429 "code": -32602, 00:12:41.429 "message": "Invalid cntlid range [6-5]" 00:12:41.429 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:41.429 13:54:32 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:41.689 13:54:32 -- target/invalid.sh@87 -- # out='request: 00:12:41.690 { 00:12:41.690 "name": "foobar", 00:12:41.690 "method": "nvmf_delete_target", 00:12:41.690 "req_id": 1 00:12:41.690 } 00:12:41.690 Got JSON-RPC error response 00:12:41.690 response: 00:12:41.690 { 00:12:41.690 "code": -32602, 00:12:41.690 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:41.690 }' 00:12:41.690 13:54:32 -- target/invalid.sh@88 -- # [[ request: 00:12:41.690 { 00:12:41.690 "name": "foobar", 00:12:41.690 "method": "nvmf_delete_target", 00:12:41.690 "req_id": 1 00:12:41.690 } 00:12:41.690 Got JSON-RPC error response 00:12:41.690 response: 00:12:41.690 { 00:12:41.690 "code": -32602, 00:12:41.690 "message": "The specified target doesn't exist, cannot delete it." 
00:12:41.690 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:41.690 13:54:32 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:41.690 13:54:32 -- target/invalid.sh@91 -- # nvmftestfini 00:12:41.690 13:54:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:41.690 13:54:32 -- nvmf/common.sh@116 -- # sync 00:12:41.690 13:54:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:41.690 13:54:32 -- nvmf/common.sh@119 -- # set +e 00:12:41.690 13:54:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:41.690 13:54:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:41.690 rmmod nvme_tcp 00:12:41.690 rmmod nvme_fabrics 00:12:41.690 rmmod nvme_keyring 00:12:41.690 13:54:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:41.690 13:54:32 -- nvmf/common.sh@123 -- # set -e 00:12:41.690 13:54:32 -- nvmf/common.sh@124 -- # return 0 00:12:41.690 13:54:32 -- nvmf/common.sh@477 -- # '[' -n 3181273 ']' 00:12:41.690 13:54:32 -- nvmf/common.sh@478 -- # killprocess 3181273 00:12:41.690 13:54:32 -- common/autotest_common.sh@926 -- # '[' -z 3181273 ']' 00:12:41.690 13:54:32 -- common/autotest_common.sh@930 -- # kill -0 3181273 00:12:41.690 13:54:32 -- common/autotest_common.sh@931 -- # uname 00:12:41.690 13:54:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:41.690 13:54:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3181273 00:12:41.690 13:54:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:41.690 13:54:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:41.690 13:54:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3181273' 00:12:41.690 killing process with pid 3181273 00:12:41.690 13:54:32 -- common/autotest_common.sh@945 -- # kill 3181273 00:12:41.690 13:54:32 -- common/autotest_common.sh@950 -- # wait 3181273 00:12:41.950 13:54:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:41.950 13:54:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:41.950 13:54:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:41.950 13:54:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:41.950 13:54:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:41.950 13:54:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.950 13:54:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:41.950 13:54:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.855 13:54:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:43.855 00:12:43.855 real 0m11.452s 00:12:43.855 user 0m19.221s 00:12:43.855 sys 0m4.774s 00:12:43.855 13:54:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.856 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:12:43.856 ************************************ 00:12:43.856 END TEST nvmf_invalid 00:12:43.856 ************************************ 00:12:44.115 13:54:34 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:44.115 13:54:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:44.115 13:54:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:44.115 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:12:44.115 ************************************ 00:12:44.115 START TEST nvmf_abort 00:12:44.115 ************************************ 00:12:44.115 13:54:34 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:44.115 * Looking for test storage... 00:12:44.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.115 13:54:34 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.115 13:54:34 -- nvmf/common.sh@7 -- # uname -s 00:12:44.115 13:54:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.115 13:54:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.115 13:54:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.115 13:54:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.115 13:54:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.115 13:54:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.115 13:54:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.116 13:54:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.116 13:54:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.116 13:54:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.116 13:54:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.116 13:54:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:44.116 13:54:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.116 13:54:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.116 13:54:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.116 13:54:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.116 13:54:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.116 13:54:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.116 13:54:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.116 13:54:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.116 13:54:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.116 13:54:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.116 13:54:34 -- paths/export.sh@5 -- # export PATH 00:12:44.116 13:54:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.116 13:54:34 -- nvmf/common.sh@46 -- # : 0 00:12:44.116 13:54:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:44.116 13:54:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:44.116 13:54:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:44.116 13:54:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.116 13:54:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.116 13:54:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:44.116 13:54:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:44.116 13:54:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:44.116 13:54:34 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:44.116 13:54:34 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:44.116 13:54:34 -- target/abort.sh@14 -- # nvmftestinit 00:12:44.116 13:54:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:44.116 13:54:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.116 13:54:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:44.116 13:54:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:44.116 13:54:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:44.116 13:54:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.116 13:54:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.116 13:54:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.116 13:54:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:44.116 13:54:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:44.116 13:54:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:44.116 13:54:34 -- common/autotest_common.sh@10 -- # set +x 00:12:49.454 13:54:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:49.454 13:54:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:49.454 13:54:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:49.454 13:54:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:49.454 13:54:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:49.454 13:54:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:49.454 13:54:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:49.454 13:54:40 -- nvmf/common.sh@294 -- # net_devs=() 00:12:49.454 13:54:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:49.454 13:54:40 -- nvmf/common.sh@295 -- 
# e810=() 00:12:49.454 13:54:40 -- nvmf/common.sh@295 -- # local -ga e810 00:12:49.454 13:54:40 -- nvmf/common.sh@296 -- # x722=() 00:12:49.454 13:54:40 -- nvmf/common.sh@296 -- # local -ga x722 00:12:49.454 13:54:40 -- nvmf/common.sh@297 -- # mlx=() 00:12:49.454 13:54:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:49.454 13:54:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.454 13:54:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:49.454 13:54:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:49.454 13:54:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:49.454 13:54:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:49.454 13:54:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:49.454 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:49.454 13:54:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:49.454 13:54:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:49.454 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:49.454 13:54:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:49.454 13:54:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:49.454 13:54:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.454 13:54:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:49.454 13:54:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.454 13:54:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:49.454 Found 
net devices under 0000:86:00.0: cvl_0_0 00:12:49.454 13:54:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.454 13:54:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:49.454 13:54:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.454 13:54:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:49.454 13:54:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.454 13:54:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:49.454 Found net devices under 0000:86:00.1: cvl_0_1 00:12:49.454 13:54:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.454 13:54:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:49.454 13:54:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:49.454 13:54:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:49.454 13:54:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:49.454 13:54:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:49.454 13:54:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.454 13:54:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:49.454 13:54:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:49.454 13:54:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:49.454 13:54:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:49.454 13:54:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:49.454 13:54:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:49.454 13:54:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.454 13:54:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:49.454 13:54:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:49.455 13:54:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:49.455 13:54:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:49.455 13:54:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:49.455 13:54:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:49.455 13:54:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:49.455 13:54:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:49.455 13:54:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:49.455 13:54:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:49.455 13:54:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:49.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:12:49.455 00:12:49.455 --- 10.0.0.2 ping statistics --- 00:12:49.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.455 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:12:49.455 13:54:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:49.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:12:49.455 00:12:49.455 --- 10.0.0.1 ping statistics --- 00:12:49.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.455 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:12:49.455 13:54:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.455 13:54:40 -- nvmf/common.sh@410 -- # return 0 00:12:49.455 13:54:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:49.455 13:54:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.455 13:54:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:49.455 13:54:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:49.455 13:54:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.455 13:54:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:49.455 13:54:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:49.455 13:54:40 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:49.455 13:54:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:49.455 13:54:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:49.455 13:54:40 -- common/autotest_common.sh@10 -- # set +x 00:12:49.715 13:54:40 -- nvmf/common.sh@469 -- # nvmfpid=3185478 00:12:49.715 13:54:40 -- nvmf/common.sh@470 -- # waitforlisten 3185478 00:12:49.715 13:54:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:49.715 13:54:40 -- common/autotest_common.sh@819 -- # '[' -z 3185478 ']' 00:12:49.715 13:54:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.715 13:54:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:49.715 13:54:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.715 13:54:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:49.715 13:54:40 -- common/autotest_common.sh@10 -- # set +x 00:12:49.715 [2024-07-23 13:54:40.508399] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:49.715 [2024-07-23 13:54:40.508446] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.715 EAL: No free 2048 kB hugepages reported on node 1 00:12:49.715 [2024-07-23 13:54:40.567087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:49.715 [2024-07-23 13:54:40.644780] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:49.715 [2024-07-23 13:54:40.644898] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.715 [2024-07-23 13:54:40.644907] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.715 [2024-07-23 13:54:40.644914] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
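Condensed for readability, the interface plumbing nvmftestinit traced above amounts to moving one port of the NIC pair into a private namespace, addressing both ends, opening the NVMe/TCP port, and ping-verifying both directions; the device names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are the ones from this run:

# Target port lives in its own namespace; the initiator stays in the default one.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # NVMe/TCP listener port
ping -c 1 10.0.0.2                                    # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator

This is why nvmf_tgt is launched below with ip netns exec cvl_0_0_ns_spdk: the target must run where cvl_0_0 and 10.0.0.2 now live.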
00:12:49.715 [2024-07-23 13:54:40.645019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.715 [2024-07-23 13:54:40.645106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.715 [2024-07-23 13:54:40.645108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.653 13:54:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:50.653 13:54:41 -- common/autotest_common.sh@852 -- # return 0 00:12:50.653 13:54:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:50.654 13:54:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:50.654 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.654 13:54:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.654 13:54:41 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:50.654 13:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.654 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.654 [2024-07-23 13:54:41.357661] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.654 13:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.654 13:54:41 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:50.654 13:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.654 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.654 Malloc0 00:12:50.654 13:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.654 13:54:41 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:50.654 13:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.654 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.654 Delay0 00:12:50.654 13:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.654 13:54:41 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:50.654 13:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.654 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.654 13:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.654 13:54:41 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:50.654 13:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.654 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.654 13:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.654 13:54:41 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:50.654 13:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.654 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.654 [2024-07-23 13:54:41.432205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.654 13:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.654 13:54:41 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.654 13:54:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.654 13:54:41 -- common/autotest_common.sh@10 -- # set +x 00:12:50.654 13:54:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.654 13:54:41 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:50.654 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.654 [2024-07-23 13:54:41.503497] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:53.192 Initializing NVMe Controllers 00:12:53.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:53.192 controller IO queue size 128 less than required 00:12:53.192 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:53.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:53.192 Initialization complete. Launching workers. 00:12:53.192 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41072 00:12:53.192 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 41133, failed to submit 62 00:12:53.192 success 41072, unsuccess 61, failed 0 00:12:53.192 13:54:43 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:53.192 13:54:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.192 13:54:43 -- common/autotest_common.sh@10 -- # set +x 00:12:53.192 13:54:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.192 13:54:43 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:53.192 13:54:43 -- target/abort.sh@38 -- # nvmftestfini 00:12:53.192 13:54:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:53.192 13:54:43 -- nvmf/common.sh@116 -- # sync 00:12:53.192 13:54:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:53.192 13:54:43 -- nvmf/common.sh@119 -- # set +e 00:12:53.192 13:54:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:53.192 13:54:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:53.192 rmmod nvme_tcp 00:12:53.192 rmmod nvme_fabrics 00:12:53.192 rmmod nvme_keyring 00:12:53.192 13:54:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:53.192 13:54:43 -- nvmf/common.sh@123 -- # set -e 00:12:53.192 13:54:43 -- nvmf/common.sh@124 -- # return 0 00:12:53.192 13:54:43 -- nvmf/common.sh@477 -- # '[' -n 3185478 ']' 00:12:53.192 13:54:43 -- nvmf/common.sh@478 -- # killprocess 3185478 00:12:53.192 13:54:43 -- common/autotest_common.sh@926 -- # '[' -z 3185478 ']' 00:12:53.192 13:54:43 -- common/autotest_common.sh@930 -- # kill -0 3185478 00:12:53.192 13:54:43 -- common/autotest_common.sh@931 -- # uname 00:12:53.192 13:54:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:53.192 13:54:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3185478 00:12:53.192 13:54:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:53.192 13:54:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:53.192 13:54:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3185478' 00:12:53.192 killing process with pid 3185478 00:12:53.192 13:54:43 -- common/autotest_common.sh@945 -- # kill 3185478 00:12:53.192 13:54:43 -- common/autotest_common.sh@950 -- # wait 3185478 00:12:53.192 13:54:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:53.192 13:54:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:53.192 13:54:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:53.192 13:54:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.192 13:54:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:53.192 
13:54:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.192 13:54:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.192 13:54:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.099 13:54:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:55.099 00:12:55.099 real 0m11.140s 00:12:55.099 user 0m13.087s 00:12:55.099 sys 0m5.053s 00:12:55.099 13:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.099 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 ************************************ 00:12:55.099 END TEST nvmf_abort 00:12:55.099 ************************************ 00:12:55.099 13:54:46 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:55.099 13:54:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:55.099 13:54:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.099 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:12:55.099 ************************************ 00:12:55.099 START TEST nvmf_ns_hotplug_stress 00:12:55.099 ************************************ 00:12:55.099 13:54:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:55.359 * Looking for test storage... 00:12:55.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.359 13:54:46 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.359 13:54:46 -- nvmf/common.sh@7 -- # uname -s 00:12:55.359 13:54:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.359 13:54:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.359 13:54:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.359 13:54:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.359 13:54:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.359 13:54:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.359 13:54:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.359 13:54:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.359 13:54:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.359 13:54:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.359 13:54:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.359 13:54:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:55.359 13:54:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.359 13:54:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.359 13:54:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.359 13:54:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.359 13:54:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.359 13:54:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.359 13:54:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.359 13:54:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.359 13:54:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.359 13:54:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.359 13:54:46 -- paths/export.sh@5 -- # export PATH 00:12:55.359 13:54:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.359 13:54:46 -- nvmf/common.sh@46 -- # : 0 00:12:55.359 13:54:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:55.359 13:54:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:55.359 13:54:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:55.359 13:54:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.359 13:54:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.359 13:54:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:55.359 13:54:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:55.359 13:54:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:55.359 13:54:46 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.359 13:54:46 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:55.359 13:54:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:55.359 13:54:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.359 13:54:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:55.359 13:54:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:55.359 13:54:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:55.359 13:54:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:55.359 13:54:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.359 13:54:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.359 13:54:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:55.359 13:54:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:55.359 13:54:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:55.359 13:54:46 -- common/autotest_common.sh@10 -- # set +x 00:13:00.636 13:54:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:00.636 13:54:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:00.636 13:54:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:00.636 13:54:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:00.636 13:54:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:00.636 13:54:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:00.636 13:54:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:00.636 13:54:51 -- nvmf/common.sh@294 -- # net_devs=() 00:13:00.636 13:54:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:00.636 13:54:51 -- nvmf/common.sh@295 -- # e810=() 00:13:00.636 13:54:51 -- nvmf/common.sh@295 -- # local -ga e810 00:13:00.636 13:54:51 -- nvmf/common.sh@296 -- # x722=() 00:13:00.636 13:54:51 -- nvmf/common.sh@296 -- # local -ga x722 00:13:00.636 13:54:51 -- nvmf/common.sh@297 -- # mlx=() 00:13:00.636 13:54:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:00.636 13:54:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.636 13:54:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:00.636 13:54:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:00.636 13:54:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:00.636 13:54:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:00.636 13:54:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:00.636 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:00.636 13:54:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:00.636 13:54:51 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:00.636 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:00.636 13:54:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.636 13:54:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.637 13:54:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:00.637 13:54:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:00.637 13:54:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:00.637 13:54:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:00.637 13:54:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:00.637 13:54:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.637 13:54:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:00.637 13:54:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.637 13:54:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:00.637 Found net devices under 0000:86:00.0: cvl_0_0 00:13:00.637 13:54:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.637 13:54:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:00.637 13:54:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.637 13:54:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:00.637 13:54:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.637 13:54:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:00.637 Found net devices under 0000:86:00.1: cvl_0_1 00:13:00.637 13:54:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.637 13:54:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:00.637 13:54:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:00.637 13:54:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:00.637 13:54:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:00.637 13:54:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:00.637 13:54:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.637 13:54:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.637 13:54:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.637 13:54:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:00.637 13:54:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.637 13:54:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.637 13:54:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:00.637 13:54:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.637 13:54:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.637 13:54:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:00.637 13:54:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:00.637 13:54:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.637 13:54:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.637 13:54:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.637 13:54:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.637 13:54:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:00.637 13:54:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
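The ip commands in this stretch split the two cvl ports into a same-host target/initiator pair; condensed as a hedged sketch, with interface names and addresses exactly as they appear in the trace:
  ip netns add cvl_0_0_ns_spdk                               # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up       # followed below by lo up and an iptables ACCEPT on port 4420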
00:13:00.637 13:54:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.637 13:54:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.897 13:54:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:00.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:13:00.897 00:13:00.897 --- 10.0.0.2 ping statistics --- 00:13:00.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.897 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:13:00.897 13:54:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:00.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:13:00.897 00:13:00.897 --- 10.0.0.1 ping statistics --- 00:13:00.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.897 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:13:00.897 13:54:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.897 13:54:51 -- nvmf/common.sh@410 -- # return 0 00:13:00.897 13:54:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:00.897 13:54:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.897 13:54:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:00.897 13:54:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:00.897 13:54:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.897 13:54:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:00.897 13:54:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:00.897 13:54:51 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:00.897 13:54:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:00.897 13:54:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:00.897 13:54:51 -- common/autotest_common.sh@10 -- # set +x 00:13:00.897 13:54:51 -- nvmf/common.sh@469 -- # nvmfpid=3189550 00:13:00.897 13:54:51 -- nvmf/common.sh@470 -- # waitforlisten 3189550 00:13:00.897 13:54:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:00.897 13:54:51 -- common/autotest_common.sh@819 -- # '[' -z 3189550 ']' 00:13:00.897 13:54:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.897 13:54:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:00.897 13:54:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.897 13:54:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:00.897 13:54:51 -- common/autotest_common.sh@10 -- # set +x 00:13:00.897 [2024-07-23 13:54:51.753634] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:13:00.897 [2024-07-23 13:54:51.753680] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.897 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.897 [2024-07-23 13:54:51.813321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:00.897 [2024-07-23 13:54:51.884622] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:00.897 [2024-07-23 13:54:51.884737] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.897 [2024-07-23 13:54:51.884745] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.897 [2024-07-23 13:54:51.884751] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:00.897 [2024-07-23 13:54:51.884855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.897 [2024-07-23 13:54:51.884941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:00.897 [2024-07-23 13:54:51.884942] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.835 13:54:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:01.835 13:54:52 -- common/autotest_common.sh@852 -- # return 0 00:13:01.835 13:54:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:01.835 13:54:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:01.835 13:54:52 -- common/autotest_common.sh@10 -- # set +x 00:13:01.835 13:54:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.835 13:54:52 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:01.835 13:54:52 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:01.835 [2024-07-23 13:54:52.742435] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.835 13:54:52 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:02.095 13:54:52 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.095 [2024-07-23 13:54:53.099806] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.354 13:54:53 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:02.354 13:54:53 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:02.613 Malloc0 00:13:02.613 13:54:53 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:02.872 Delay0 00:13:02.872 13:54:53 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.872 13:54:53 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:03.131 NULL1 00:13:03.131 13:54:54 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:03.390 13:54:54 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:03.390 13:54:54 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3189992 00:13:03.390 13:54:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:03.390 13:54:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.390 EAL: No free 2048 kB hugepages reported on node 1 00:13:04.328 Read completed with error (sct=0, sc=11) 00:13:04.328 13:54:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.328 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.587 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.587 13:54:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:04.587 13:54:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:04.847 true 00:13:04.847 13:54:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:04.847 13:54:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.786 13:54:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.786 13:54:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:05.786 13:54:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:06.046 true 00:13:06.046 13:54:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:06.046 13:54:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.046 13:54:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.305 13:54:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:06.305 13:54:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:06.565 true 00:13:06.565 13:54:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:06.565 13:54:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.503 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
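From here on, each stress iteration repeats the same handful of trace records; a hedged sketch of that loop (rpc abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shown above, PERF_PID=3189992 from the trace):
  null_size=1000
  while kill -0 "$PERF_PID"; do                              # is the perf reader still running?
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    null_size=$((null_size + 1))
    rpc bdev_null_resize NULL1 "$null_size"                  # prints 'true' in the trace on success
  done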
00:13:07.503 13:54:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:07.762 13:54:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:07.762 13:54:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:08.021 true 00:13:08.021 13:54:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:08.021 13:54:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.984 13:54:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.984 13:54:59 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:08.984 13:54:59 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:09.243 true 00:13:09.243 13:55:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:09.243 13:55:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.502 13:55:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.502 13:55:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:09.502 13:55:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:09.762 true 00:13:09.762 13:55:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:09.762 13:55:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.141 13:55:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.141 13:55:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:11.141 13:55:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:11.400 true 00:13:11.400 13:55:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:11.400 13:55:02 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.338 13:55:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.338 13:55:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:12.338 13:55:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:12.338 true 00:13:12.338 13:55:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:12.338 13:55:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.596 13:55:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.854 13:55:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:12.854 13:55:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:12.854 true 00:13:12.854 13:55:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:12.854 13:55:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.112 13:55:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.371 13:55:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:13.371 13:55:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:13.371 true 00:13:13.630 13:55:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:13.630 13:55:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.630 13:55:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.889 13:55:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:13.889 13:55:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:13.889 true 00:13:14.147 13:55:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:14.147 13:55:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.147 13:55:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.429 [2024-07-23 13:55:05.249415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: 
*ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:14.432 Message suppressed: ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (same record repeated back-to-back, timestamps 13:55:05.249509 through 13:55:05.263094)
[2024-07-23 13:55:05.263131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.263986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.264938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265683] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.265977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 
[2024-07-23 13:55:05.266827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.432 [2024-07-23 13:55:05.266923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.266968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.267722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.268967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269593] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.269988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.270033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.270075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.270107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.270146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.270183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.270213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.270243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.433 [2024-07-23 13:55:05.270293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 
[2024-07-23 13:55:05.270687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.270983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.271956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272268] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:14.434 [2024-07-23 13:55:05.272616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272698] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.272969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:14.434 [2024-07-23 13:55:05.273318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.273985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.434 [2024-07-23 13:55:05.274957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.275950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 13:55:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:14.435 [2024-07-23 13:55:05.275995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 
> SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 13:55:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:14.435 [2024-07-23 13:55:05.276381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276564] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.276984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277189] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.277962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 
[2024-07-23 13:55:05.278663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.278965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279128] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.435 [2024-07-23 13:55:05.279557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.436 [2024-07-23 13:55:05.279601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.436 [2024-07-23 13:55:05.279644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.436 [2024-07-23 13:55:05.279690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.436 [2024-07-23 13:55:05.279733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1
00:13:14.436 [2024-07-23 13:55:05.279782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.442 [... same *ERROR* line repeated once per read command through 2024-07-23 13:55:05.307300 ...]
00:13:14.442 [2024-07-23 13:55:05.307346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.307984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308609] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.308980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.309021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.309069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.442 [2024-07-23 13:55:05.309118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 
[2024-07-23 13:55:05.309710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.309905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.310964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.311977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312271] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.312957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 
[2024-07-23 13:55:05.313841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.313959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.314006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.314058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.314111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.443 [2024-07-23 13:55:05.314155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.314965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.315966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316203] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.316995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 
[2024-07-23 13:55:05.317686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.317966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.318005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.318035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.318069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.444 [2024-07-23 13:55:05.318112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.318977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.319967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320278] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.320966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 
[2024-07-23 13:55:05.321548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.321981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.445 [2024-07-23 13:55:05.322763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.322877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.322928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.322976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:14.446 [2024-07-23 13:55:05.323647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.323986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.446 [2024-07-23 13:55:05.324036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
[... identical *ERROR* line continues repeating, timestamps 13:55:05.323647 through 13:55:05.328925; duplicates elided ...]
00:13:14.447 [2024-07-23 13:55:05.328976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.329995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330426] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.330988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 
[2024-07-23 13:55:05.331566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.331972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.332019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.332072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.332123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.332172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.332220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.332262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.447 [2024-07-23 13:55:05.332305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.332993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.333965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334313] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.334999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 
[2024-07-23 13:55:05.335432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.335849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.448 [2024-07-23 13:55:05.336639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.336690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.336738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.336785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.336830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.336878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.336923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.336972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337026] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.337995] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.338692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 
[2024-07-23 13:55:05.339480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.339958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.340984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.341018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.341053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.341094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.341138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.341181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.341226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.341271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.449 [2024-07-23 13:55:05.341317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341787] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.341999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.342974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343129] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 
[2024-07-23 13:55:05.343226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.343984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.344996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 [2024-07-23 13:55:05.345836] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.450 (last message repeated for every entry timestamped [2024-07-23 13:55:05.345886] through [2024-07-23 13:55:05.372601], wall clock 00:13:14.450-00:13:14.456) [2024-07-23 13:55:05.372647] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.372700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.372751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.372794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.372841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.372885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.372927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.372969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373469] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.373816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 
[2024-07-23 13:55:05.374106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.374142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.374175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:14.456 [2024-07-23 13:55:05.374223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.374260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.374302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.374339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.456 [2024-07-23 13:55:05.374370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.374994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375046] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375103] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 
13:55:05.375251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375905] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.375955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:14.457 [2024-07-23 13:55:05.376425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.376729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.377999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378340] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.457 [2024-07-23 13:55:05.378595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.378641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.378695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.378745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.378804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.378849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.378895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.378953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379153] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.379935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 
[2024-07-23 13:55:05.380561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.380975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.381991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382226] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382854] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382901] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.382951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.383000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.383054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.383107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.383153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.458 [2024-07-23 13:55:05.383502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.383988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384264] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 
[2024-07-23 13:55:05.384311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.384997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.385997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.386961] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.387991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.388049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.388090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 [2024-07-23 13:55:05.388130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.459 
[2024-07-23 13:55:05.388161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.388956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.389012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.389066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.389115] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.389169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.389215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.389267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.389327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.460 [2024-07-23 13:55:05.389377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 
00:13:14.466 [2024-07-23 13:55:05.416842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.416882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.416922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.416962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.417979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418075] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418303] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.418836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419326] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 
[2024-07-23 13:55:05.419538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419881] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.419960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.420973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.466 [2024-07-23 13:55:05.421441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.421850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422188] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.422961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 
[2024-07-23 13:55:05.423385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.467 [2024-07-23 13:55:05.423825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.751 [2024-07-23 13:55:05.423872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.751 [2024-07-23 13:55:05.423927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.423976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.424976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:14.752 [2024-07-23 13:55:05.425599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.425956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.752 [2024-07-23 13:55:05.426011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
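For context on this flood: the target is rejecting every read because the requested transfer length (NLB * block size) exceeds the byte count described by the command's SGL, and each rejection completes with sct=0, sc=15, which is 0x0f, the NVMe generic status "Data SGL Length Invalid" that the suppressed completion message reports. The C sketch below is a stand-alone reconstruction of that kind of length check, not the actual ctrlr_bdev.c code; the function name read_cmd_length_ok and the status macro are hypothetical, and only the compared quantities (NLB 1, block size 512, SGL length 1) and the resulting status are taken from this log.

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical macro for NVMe generic status 0x0f, "Data SGL Length
 * Invalid" -- the sc=15 seen in the suppressed completion message. */
#define NVME_SC_DATA_SGL_LENGTH_INVALID 0x0f

/* Minimal sketch of the length check behind the repeated *ERROR* line:
 * nlb is the block count as printed in the log (already converted from
 * NVMe's zero-based NLB field), block_size is the namespace block size,
 * and sgl_length is the number of bytes the command's SGL describes. */
static bool
read_cmd_length_ok(uint64_t nlb, uint32_t block_size, uint32_t sgl_length,
		   uint8_t *sc_out)
{
	if (nlb * block_size > sgl_length) {
		fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
			" > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
		*sc_out = NVME_SC_DATA_SGL_LENGTH_INVALID;
		return false;
	}
	return true;
}

int
main(void)
{
	uint8_t sc = 0;

	/* The exact mismatch from this log: 1 block of 512 bytes against
	 * an SGL covering only 1 byte -> rejected with sct=0, sc=15. */
	if (!read_cmd_length_ok(1, 512, 1, &sc)) {
		printf("Read completed with error (sct=0, sc=%u)\n", (unsigned)sc);
	}
	return 0;
}

With the log's values (one 512-byte block against a 1-byte SGL) the check fails on every command, which is why the identical *ERROR* line repeats once per generated read while the per-command completion message is suppressed.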
[... the same ctrlr_bdev.c:298 *ERROR* line repeated again, timestamps 13:55:05.425599 through 13:55:05.439017 (elapsed 00:13:14.752 to 00:13:14.755) ...]
00:13:14.755 [2024-07-23 13:55:05.439074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 true 00:13:14.755 [2024-07-23 13:55:05.439668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.439954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440217] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.440972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 
[2024-07-23 13:55:05.441745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.441999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.442946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443201] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.755 [2024-07-23 13:55:05.443548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.443989] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444126] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.444999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 
[2024-07-23 13:55:05.445588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.445982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.446967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.756 [2024-07-23 13:55:05.447462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.447505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.447543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.447593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.447625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.447969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448183] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448616] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.448982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 
[2024-07-23 13:55:05.449322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.449965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.450980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.451988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452033] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.757 [2024-07-23 13:55:05.452440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.452988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 
[2024-07-23 13:55:05.453140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453828] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.453965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454693] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.454982] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.758 [2024-07-23 13:55:05.455723] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.758 [2024-07-23 13:55:05.455769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.760 [2024-07-23 13:55:05.463539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
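For context on the flood above: the target rejects every queued read because the bytes a command would transfer (NLB * block size = 1 * 512) exceed the 1-byte buffer its SGL describes, so each read completes with an error. A minimal sketch of that kind of length check follows; struct and function names are illustrative, not SPDK's actual code.

    /* Illustrative sketch (not SPDK's actual implementation) of the check
     * that emits the error above: reject a read whose transfer length
     * (logical blocks * block size) exceeds the SGL-described buffer. */
    #include <inttypes.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct read_req {            /* hypothetical request layout */
        uint64_t nlb;            /* number of logical blocks to read */
        uint32_t block_size;     /* logical block size in bytes (512 here) */
        uint64_t sgl_length;     /* bytes available in the SGL buffers */
    };

    static bool read_cmd_len_ok(const struct read_req *req)
    {
        uint64_t xfer = req->nlb * req->block_size;  /* bytes the read needs */

        if (xfer > req->sgl_length) {
            /* mirrors the log: "Read NLB 1 * block size 512 > SGL length 1" */
            fprintf(stderr, "Read NLB %" PRIu64 " * block size %" PRIu32
                    " > SGL length %" PRIu64 "\n",
                    req->nlb, req->block_size, req->sgl_length);
            return false;        /* command completes with an error status */
        }
        return true;
    }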
00:13:14.760 [2024-07-23 13:55:05.464374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.760 13:55:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992
00:13:14.760 [2024-07-23 13:55:05.464426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.760 [2024-07-23 13:55:05.464725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.760 13:55:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:14.760 [2024-07-23 13:55:05.464774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.760 [2024-07-23 13:55:05.464976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
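The "kill -0 3189992" probe in the trace above delivers no signal; signal 0 only asks the kernel to perform the existence and permission checks, so the exit status tells the stress script whether the target PID is still alive before it hot-removes the namespace. The same idiom in C, as a sketch (process_alive is a hypothetical helper, not part of the test):

    /* Sketch of the liveness probe behind `kill -0 <pid>`: signal 0 is
     * never delivered, so kill(2)'s return value reports only whether the
     * process exists and could be signalled. */
    #include <errno.h>
    #include <signal.h>
    #include <stdbool.h>
    #include <sys/types.h>

    static bool process_alive(pid_t pid)
    {
        if (kill(pid, 0) == 0) {
            return true;         /* process exists and we may signal it */
        }
        return errno == EPERM;   /* EPERM: exists but not ours; ESRCH: gone */
    }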
00:13:14.762 [2024-07-23 13:55:05.474831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.762 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:14.762 [2024-07-23 13:55:05.474875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.764 [2024-07-23 13:55:05.482940] ctrlr_bdev.c:
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.482987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.483980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 
[2024-07-23 13:55:05.484502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.484981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.764 [2024-07-23 13:55:05.485879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.485924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.485971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.486966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487076] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487681] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.487976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488179] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 
[2024-07-23 13:55:05.488235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488908] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.488954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.489987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.490028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.490088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.490142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.765 [2024-07-23 13:55:05.490189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490898] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.490986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491426] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491498] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.491994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 
[2024-07-23 13:55:05.492041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492110] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.492976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493817] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.493964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494631] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.766 [2024-07-23 13:55:05.494887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.494937] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.494985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495095] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495142] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 
[2024-07-23 13:55:05.495870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.495969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.496968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497618] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497767] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.497979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498132] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498373] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498476] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.498962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499638] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499668] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.767 [2024-07-23 13:55:05.499848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.768 [2024-07-23 13:55:05.499899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.768 
00:13:14.768 [2024-07-23 13:55:05.499943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.768 [... the same *ERROR* line repeats several hundred times, timestamps 13:55:05.499993 through 13:55:05.524946; only the timestamp changes ...]
00:13:14.773 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:14.773 [... the same *ERROR* line continues, timestamps 13:55:05.524984 through 13:55:05.526492 ...]
Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526732] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.526964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527209] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.527988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.528036] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.773 [2024-07-23 13:55:05.528092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.528958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 
[2024-07-23 13:55:05.529289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529738] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.529971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.530976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531617] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531849] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.531975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.532017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.532071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.532102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.532134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.774 [2024-07-23 13:55:05.532174] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532482] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532651] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 
[2024-07-23 13:55:05.532939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.532991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533344] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533596] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.533985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.534971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535508] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535643] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535859] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.535977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 [2024-07-23 13:55:05.536677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.775 
[2024-07-23 13:55:05.536721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.536773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.536825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.536871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537600] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537740] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.537995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538245] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.538998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539340] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539747] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.539983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 
[2024-07-23 13:55:05.540839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.540985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541288] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.776 [2024-07-23 13:55:05.541829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.541873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.541923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.541972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542296] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.542961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.543010] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.543063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.543111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.777 [2024-07-23 13:55:05.543159] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.777 [... the same ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* line repeats several hundred times, timestamps 2024-07-23 13:55:05.543212 through 13:55:05.570583 (console time 00:13:14.777-00:13:14.783); verbatim duplicates elided ...]
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.570969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571176] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571623] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 
[2024-07-23 13:55:05.571780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.571981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.572964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573069] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573645] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573882] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573955] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.573999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574482] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.574979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.575029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.575084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.575131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.575183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.575229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.783 [2024-07-23 13:55:05.575273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.575952] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 
[2024-07-23 13:55:05.575983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576080] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:14.784 [2024-07-23 13:55:05.576309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576703] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.576944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 
13:55:05.577214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.577953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:14.784 [2024-07-23 13:55:05.578451] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.578974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579594] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.784 [2024-07-23 13:55:05.579640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.579690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.579744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.579793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.579844] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.579896] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.579944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.579995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.580966] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581002] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581770] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.581980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 
[2024-07-23 13:55:05.582568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.582972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583488] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583722] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.583969] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.785 [2024-07-23 13:55:05.584442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584701] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584800] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.584971] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.585968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 
[2024-07-23 13:55:05.586473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586619] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586678] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586927] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.586974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587507] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.786 [2024-07-23 13:55:05.587592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:14.791 [... the identical ctrlr_bdev.c:298 nvmf_bdev_ctrlr_read_cmd error "Read NLB 1 * block size 512 > SGL length 1" repeats several hundred times across app timestamps 13:55:05.587 through 13:55:05.613; duplicate lines omitted ...]
[2024-07-23 13:55:05.613773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.791 [2024-07-23 13:55:05.613824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.791 [2024-07-23 13:55:05.613867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.791 [2024-07-23 13:55:05.613917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.613970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614074] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.614994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615279] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615474] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615518] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.615978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616104] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616347] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616448] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616690] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.616995] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 
[2024-07-23 13:55:05.617719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.617986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618392] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618650] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618796] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618900] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.618999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.792 [2024-07-23 13:55:05.619448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619599] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.619981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620337] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620384] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.620965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621011] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621661] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 
[2024-07-23 13:55:05.621856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.621970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622429] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.622994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623453] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623516] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623567] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623714] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.623959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.624022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.624084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.793 [2024-07-23 13:55:05.624136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624268] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.624992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625251] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625551] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 
[2024-07-23 13:55:05.625772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625878] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625924] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.625965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626140] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626523] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626725] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626821] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.626971] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627223] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:14.794 [2024-07-23 13:55:05.627921] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.627970] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628099] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628146] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:14.794 [2024-07-23 13:55:05.628435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628683] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.628994] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629352] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629385] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629424] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629506] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629595] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629779] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.794 [2024-07-23 13:55:05.629869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.629919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.629973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630334] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630481] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630791] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630826] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630958] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.630998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631041] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631876] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.631968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 
[2024-07-23 13:55:05.632301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632545] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632630] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.632997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633263] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633314] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633605] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633705] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633803] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.633984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.634031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 [2024-07-23 13:55:05.634077] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:14.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.795 13:55:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:14.795 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.083 [2024-07-23 13:55:05.840916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.083 [2024-07-23 13:55:05.840999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.083 [2024-07-23 13:55:05.841048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.083 [2024-07-23 13:55:05.841093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.083 [2024-07-23 13:55:05.841136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.083 [2024-07-23 13:55:05.841168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.083 [2024-07-23 13:55:05.841196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.083 [2024-07-23 13:55:05.841236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.083 [2024-07-23 
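Each failing read in the bursts above requests one 512-byte logical block while the command's SGL describes only a single byte of buffer, so the target's length check in ctrlr_bdev.c (nvmf_bdev_ctrlr_read_cmd) rejects the command before any data is moved; the hot-plug stress script is meanwhile re-adding the Delay0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1 via rpc.py nvmf_subsystem_add_ns. A minimal sketch of that kind of length check, using hypothetical names (read_cmd, validate_read_len) rather than SPDK's actual structures:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the read command's relevant field: NLB is
 * 0-based in NVMe, so nlb + 1 logical blocks are transferred. */
struct read_cmd {
    uint32_t nlb;    /* Number of Logical Blocks, CDW12 bits 15:0 */
};

static int
validate_read_len(const struct read_cmd *cmd, uint32_t block_size, uint64_t sgl_len)
{
    /* The requested transfer must fit in the buffer the SGL describes. */
    uint64_t xfer_len = ((uint64_t)cmd->nlb + 1) * block_size;

    if (xfer_len > sgl_len) {
        fprintf(stderr, "Read NLB %u * block size %u > SGL length %llu\n",
                cmd->nlb + 1, block_size, (unsigned long long)sgl_len);
        return -1;    /* fail the command instead of overrunning the buffer */
    }
    return 0;
}

int main(void)
{
    struct read_cmd cmd = { .nlb = 0 };    /* one 512-byte block requested */

    /* Reproduces the logged case: 512 bytes requested, 1-byte SGL. */
    validate_read_len(&cmd, 512, 1);
    return 0;
}

The check fails the command cleanly, and the initiator sees each rejection as a completion with an error status, which is what the suppressed "Read completed with error" summaries count.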
00:13:15.083 [2024-07-23 13:55:05.840916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:15.083 [... identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeated at timestamps 13:55:05.840999 through 13:55:05.853841; duplicates omitted ...]
00:13:15.085 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
00:13:15.085 [... identical "Read NLB 1 * block size 512 > SGL length 1" errors from ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd repeated at timestamps 13:55:05.853885 through 13:55:05.864983; duplicates omitted ...]
00:13:15.088 [2024-07-23 13:55:05.865028] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865144] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865571] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.865981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866071] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866394] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866592] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866644] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.866978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867145] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867182] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 
[2024-07-23 13:55:05.867751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.088 [2024-07-23 13:55:05.867801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.867847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.867897] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.867949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.867998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868639] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.868846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869220] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 13:55:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:15.089 [2024-07-23 13:55:05.869606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869662] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869756] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869860] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869907] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 13:55:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:15.089 [2024-07-23 13:55:05.869951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.869999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.870049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.870100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.870154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.870203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.870252] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.870298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.870342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089 [2024-07-23 13:55:05.870383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > 
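The repeated error is nvmf_bdev_ctrlr_read_cmd rejecting a 1-block (512-byte) read whose SGL describes only 1 byte of buffer, a condition the stress script keeps triggering while it resizes the NULL1 bdev (null_size is bumped to 1013 and applied with bdev_null_resize) with reads still in flight. A minimal, self-contained sketch of that length check follows; the names and error handling are hypothetical, not the exact ctrlr_bdev.c source.

/* Sketch of the NLB-vs-SGL length check behind the repeated error.
 * Names are illustrative; the real check lives in SPDK's ctrlr_bdev.c. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static int check_read_length(uint64_t nlb, uint32_t block_size, uint32_t sgl_length)
{
    /* NLB is the NVMe "number of logical blocks"; the transfer needs
     * nlb * block_size bytes, which must fit in the SGL the host sent. */
    if (nlb * block_size > sgl_length) {
        fprintf(stderr, "*ERROR*: Read NLB %" PRIu64 " * block size %" PRIu32
                " > SGL length %" PRIu32 "\n", nlb, block_size, sgl_length);
        return -1; /* the request completes with an SGL-length error status */
    }
    return 0;
}

int main(void)
{
    /* The failing case from this log: 1 block of 512 bytes, 1-byte SGL. */
    return check_read_length(1, 512, 1) ? 1 : 0;
}

Compiled and run with the values from the log (1, 512, 1), the sketch prints the same error line once; in the test the read path hits this check repeatedly, which is consistent with the flood above.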
[2024-07-23 13:55:05.870423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.089
[... the same *ERROR* line repeats continuously (timestamps 13:55:05.870454 through 13:55:05.887049) ...]
[2024-07-23 13:55:05.887107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887207] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887750] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.887980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888637] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888686] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888829] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.888985] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889477] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889822] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889864] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 
[2024-07-23 13:55:05.889910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.889981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890066] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890309] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890471] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890615] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890666] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890919] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.890968] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891014] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891166] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891420] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.093 [2024-07-23 13:55:05.891744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.891788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.891827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.891867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.891906] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.891947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.891977] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892057] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892476] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892580] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892781] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892917] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.892987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893228] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 
[2024-07-23 13:55:05.893643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.893979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894038] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894138] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894367] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.894983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895133] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895285] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895723] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895769] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.895962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896117] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896341] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896387] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896494] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.094 [2024-07-23 13:55:05.896542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.896988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897183] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 
[2024-07-23 13:55:05.897509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897553] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897749] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.897795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898351] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898548] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898577] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898825] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.898983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899034] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899088] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899232] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899388] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899459] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899627] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899692] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899739] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899789] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899841] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.899984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900091] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900191] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900525] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.900964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.901324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.901361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.901403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.901450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.095 [2024-07-23 13:55:05.901500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 
[2024-07-23 13:55:05.901603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901885] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.901986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902230] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902291] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902586] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902858] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.902986] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903035] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903122] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903155] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903499] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903542] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.903987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904037] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904093] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.904988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:15.096 [2024-07-23 13:55:05.905084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905185] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905376] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.096 [2024-07-23 13:55:05.905480] ctrlr_bdev.c: 
00:13:15.102 [2024-07-23 13:55:05.931082] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block
size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931186] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931386] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931680] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931773] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.931972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932073] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932225] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932275] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932335] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932582] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932679] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.932849] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 
[2024-07-23 13:55:05.933743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933792] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.933983] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.934033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.934090] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.934139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.934194] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.102 [2024-07-23 13:55:05.934238] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934538] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934640] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934872] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.934959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935170] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935295] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.935962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936405] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936606] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936733] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936920] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.936967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937059] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937130] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937162] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937231] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937323] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937381] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937476] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 
[2024-07-23 13:55:05.937520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937664] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937754] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937783] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937941] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.937980] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938049] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938247] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938300] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938400] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938443] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938491] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938652] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938700] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.103 [2024-07-23 13:55:05.938904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.938950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.938996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939151] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939244] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939677] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939837] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.939992] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940040] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940196] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940346] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940396] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940540] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940735] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940938] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.940987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941030] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941271] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941357] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 
[2024-07-23 13:55:05.941456] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941500] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941573] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941870] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.941987] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942020] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942053] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942100] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942204] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942302] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942452] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942893] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.942991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943253] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943282] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943316] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943632] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943665] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943695] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.104 [2024-07-23 13:55:05.943898] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.943950] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.943999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944108] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944375] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944415] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944713] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944915] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.944964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945063] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 
[2024-07-23 13:55:05.945116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945212] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945544] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.945963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946091] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946211] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946576] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.946960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947106] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947216] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947509] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.105 [2024-07-23 13:55:05.947812] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:15.105 [2024-07-23 13:55:05.947852] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
[... identical *ERROR* entries repeated several hundred times, timestamps 2024-07-23 13:55:05.947883 through 13:55:05.955401, duplicates elided ...]
00:13:15.107 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
[... identical *ERROR* entries continue, timestamps 2024-07-23 13:55:05.955448 through 13:55:05.974880, duplicates elided ...]
[2024-07-23 13:55:05.974940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.974981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975240] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975322] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975458] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975603] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975635] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975772] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975867] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.975973] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976018] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976172] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976257] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976327] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976359] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976410] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.111 [2024-07-23 13:55:05.976461] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976561] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976815] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.976978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977029] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977083] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977235] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977281] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977935] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.977975] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978121] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978171] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978270] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978374] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978431] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978528] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978737] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978786] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 
[2024-07-23 13:55:05.978835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978890] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978939] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.978990] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979195] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979777] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979809] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.979979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980267] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980353] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.980706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981116] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981324] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981423] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981518] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.112 [2024-07-23 13:55:05.981719] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.981762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.981806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.981845] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.981895] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.981942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.981988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982094] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982125] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982156] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982205] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982298] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982343] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982444] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982529] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 
[2024-07-23 13:55:05.982588] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982628] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982742] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982795] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982851] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982899] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.982946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983002] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983418] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983521] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983763] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983807] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.983933] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984422] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984527] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984625] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984676] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984764] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.984959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985163] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985318] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985368] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985415] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985515] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985609] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985663] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985766] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985819] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985868] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985913] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.985956] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986003] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986427] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.113 [2024-07-23 13:55:05.986534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 
[2024-07-23 13:55:05.986568] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.986598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.986646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.986697] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.986744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.986793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.986850] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.986904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.986954] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987006] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987160] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987255] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987810] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987861] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.987961] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988123] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988168] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988269] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988414] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988514] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988610] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988748] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988814] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988947] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.988988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989036] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989127] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989198] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989275] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989463] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989513] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989659] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989708] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989856] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.989963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990009] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990064] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990218] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990495] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 
[2024-07-23 13:55:05.990535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.990949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991113] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991266] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991470] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991566] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991612] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991711] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991760] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.114 [2024-07-23 13:55:05.991855] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.115 [2024-07-23 13:55:05.991902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.115 [2024-07-23 13:55:05.991950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.115 [2024-07-23 13:55:05.991998] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.115 [2024-07-23 13:55:05.992056] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.115 [2024-07-23 13:55:05.992102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.115 [2024-07-23 13:55:05.992153] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:15.115 [... identical ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd *ERROR* entries repeated continuously from 13:55:05.992 through 13:55:06.018 (elapsed 00:13:15.115-00:13:15.120), differing only in timestamp; duplicates collapsed ...]
00:13:15.117 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.017949] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.017997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018050] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018098] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018242] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018290] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018377] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018441] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018558] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018589] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018621] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018653] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018707] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018752] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.018991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019039] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019371] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019417] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.120 [2024-07-23 13:55:06.019797] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.019839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.019883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.019932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.019978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020037] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020135] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020181] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020283] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020342] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020487] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 
[2024-07-23 13:55:06.020585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020675] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020734] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020787] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020883] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020930] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.020984] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021085] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021143] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021331] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021670] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021758] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021922] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021960] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.021991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022075] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022190] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022239] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022292] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022338] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022387] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022485] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022574] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.022672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023241] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023288] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023389] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023565] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023614] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023724] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023765] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023808] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023847] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.023999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024076] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024107] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024310] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024355] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 
[2024-07-23 13:55:06.024405] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024455] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024504] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.121 [2024-07-23 13:55:06.024597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024720] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024751] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024848] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024887] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024925] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024965] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.024999] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025055] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025105] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025150] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025197] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025250] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025399] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025446] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025493] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025642] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025728] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025771] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025812] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.025854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026187] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026219] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026307] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026337] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026366] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026395] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026483] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026512] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026766] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026818] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.026967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027068] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027120] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027167] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027312] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027502] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027550] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027696] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027889] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027944] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.027991] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 
[2024-07-23 13:55:06.028039] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028137] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028274] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028320] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028370] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028413] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028454] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028530] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028560] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028598] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028726] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028790] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.028830] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.029200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.029254] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.029299] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.029350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.029401] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.122 [2024-07-23 13:55:06.029447] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029552] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029608] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029655] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029802] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029854] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.029957] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030007] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030054] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030102] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030200] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030249] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 true 00:13:15.123 [2024-07-23 13:55:06.030349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030442] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030473] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030592] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030675] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030762] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030842] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030880] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030910] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030940] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.030972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031001] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031031] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031109] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031169] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031227] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031258] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031289] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031319] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031349] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031379] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031408] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031437] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031468] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 
[2024-07-23 13:55:06.031526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.031673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032023] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032178] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032234] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032284] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032329] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032428] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032533] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032584] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032791] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032836] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032928] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.032967] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033015] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033152] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033235] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033277] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033305] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033348] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033435] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033496] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033537] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033667] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033710] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033761] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.123 [2024-07-23 13:55:06.033811] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.033863] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.033918] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.033964] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034021] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034067] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034112] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034161] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034208] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034304] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034457] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034554] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034656] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034704] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034853] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.034951] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.035005] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.035356] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.035391] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.035434] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.035479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.035524] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.035563] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 [2024-07-23 13:55:06.035604] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.124 
00:13:15.124 [2024-07-23 13:55:06.035646] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1
00:13:15.124 [... the same *ERROR* line repeats continuously, tens of microseconds apart, from 13:55:06.035646 through 13:55:06.053 ...]
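Triage note: the repeated *ERROR* line comes from SPDK's NVMe-oF bdev read path (ctrlr_bdev.c:298 in this build), which rejects a READ whose requested transfer, NLB times block size, exceeds the data buffer described by the command's SGL: here 1 block of 512 bytes against a 1-byte SGL. A minimal shell restatement of that check, illustrative only and not the C source:

    # Hedged sketch of the size check behind the logged error; the variable
    # names are illustrative, the values are the ones printed in the log.
    nlb=1 block_size=512 sgl_length=1
    if (( nlb * block_size > sgl_length )); then
        # Same shape as the log line above.
        echo "Read NLB $nlb * block size $block_size > SGL length $sgl_length" >&2
    fi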
00:13:15.128 [... the same *ERROR* line keeps repeating, interleaved with the script trace below ...]
00:13:15.128 13:55:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992
00:13:15.128 13:55:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:15.128 Message suppressed 999 times: Read completed with error (sct=0, sc=15)
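Triage note: the two ns_hotplug_stress.sh trace lines above are the stress loop doing its job. kill -0 delivers no signal; it only tests that PID 3189992 (the I/O generator) is still alive. rpc.py nvmf_subsystem_remove_ns then hot-removes namespace 1 from nqn.2016-06.io.spdk:cnode1 while that I/O is still in flight, which is exactly what provokes the flood of read errors. A minimal sketch of the same pattern, assuming a hypothetical $io_pid variable for the generator's PID:

    # Liveness check plus namespace hot-remove, as in the trace above.
    # $io_pid is a hypothetical stand-in for the I/O generator's PID.
    if kill -0 "$io_pid" 2>/dev/null; then
        # Generator still running: pull namespace 1 out from under it.
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    fi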
00:13:15.129 [... the same *ERROR* line keeps repeating through 13:55:06.062, at which point this excerpt is truncated mid-line ...]
size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062354] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062404] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062450] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062501] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062547] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062643] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062743] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062832] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062875] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062923] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.062976] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063024] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063078] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063175] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063225] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063272] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063406] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063449] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063492] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063535] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063575] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063934] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.063978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064017] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064058] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064188] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064378] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064583] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064634] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064782] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064833] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064884] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.064981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065033] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 
[2024-07-23 13:55:06.065084] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065332] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065364] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065486] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065526] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065624] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.130 [2024-07-23 13:55:06.065706] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.065744] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.065778] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.065824] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.065869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.065916] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.065974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066022] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066131] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066180] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066363] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066402] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066433] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066462] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066543] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066626] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.066785] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067147] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067193] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067237] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067297] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067345] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067445] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067490] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067546] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067593] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067641] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067693] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067793] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067879] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.067981] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068027] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068061] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068096] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068136] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068276] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068325] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068369] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068403] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068478] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068517] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068556] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068585] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068741] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 
[2024-07-23 13:55:06.068846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068891] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068945] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.068993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069052] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069112] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069158] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069261] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069306] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069407] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069460] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069510] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069557] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069601] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069699] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069755] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069806] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069846] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069888] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.069972] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070019] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070065] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070425] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070472] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070503] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070535] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070581] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070637] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.131 [2024-07-23 13:55:06.070685] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.070736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.070788] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.070840] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.070892] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.070943] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.070993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071048] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071097] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071141] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071189] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071233] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071280] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071328] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071372] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071421] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071465] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071511] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071555] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.132 [2024-07-23 13:55:06.071598] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.396 [2024-07-23 13:55:06.071647] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.396 [2024-07-23 13:55:06.071687] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.396 [2024-07-23 13:55:06.071727] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.396 [2024-07-23 13:55:06.071768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.396 [2024-07-23 13:55:06.071798] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.396 [2024-07-23 13:55:06.071831] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.071871] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.071911] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.071953] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.071996] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072045] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072087] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072199] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072229] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072278] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072336] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072383] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072430] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072480] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072531] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072629] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072671] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 
[2024-07-23 13:55:06.072715] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072757] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072801] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072839] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072869] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072950] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.072993] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073032] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073070] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073101] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073149] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073202] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073549] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073597] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073654] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073702] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073753] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073799] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073843] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073894] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073948] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.073997] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074051] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074092] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074134] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074213] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074256] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074287] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074317] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074358] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074398] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074438] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074520] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074559] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074590] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074636] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074674] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074709] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074794] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074838] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074886] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074929] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.074962] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075000] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075086] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075162] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075215] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075259] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075311] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075416] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075466] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075522] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075570] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075669] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075721] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075774] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075820] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075866] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075914] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.075963] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076008] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076114] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076164] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076210] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076262] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076313] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076360] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 [2024-07-23 13:55:06.076691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.397 
[2024-07-23 13:55:06.076736] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.076784] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.076835] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.076874] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.076904] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.076936] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.076978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077079] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077126] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077173] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077222] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077286] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077382] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077432] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077479] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077532] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077579] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077672] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077718] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077816] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077931] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block 
size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.077978] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078025] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078081] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078124] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078203] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078246] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078293] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078330] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078361] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078397] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078439] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078489] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078536] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078578] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078613] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078649] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078682] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078712] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078759] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078805] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078865] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078912] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.078959] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.079308] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 [2024-07-23 13:55:06.079359] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.398 
[2024-07-23 13:55:06.079461 through 13:55:06.105072] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 (identical record repeated continuously over this interval; duplicate log lines collapsed) 00:13:15.403 
Message suppressed 999 times: Read completed with error (sct=0, sc=15) 00:13:15.403 
13:55:06.105118] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105165] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105217] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105365] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105412] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105467] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105519] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105569] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105611] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105648] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105691] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105731] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105768] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105813] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105857] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105946] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.105979] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106012] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106060] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106154] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106184] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106214] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL 
length 1 00:13:15.404 [2024-07-23 13:55:06.106243] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106273] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106315] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106622] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106673] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106730] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106775] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106823] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106873] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106903] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106932] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.106974] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107016] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107062] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107111] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107157] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107192] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107221] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107260] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107301] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107333] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107362] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107390] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107419] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107448] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: 
Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107475] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107505] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107534] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107562] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107591] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107620] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107658] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107689] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107717] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107746] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107776] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107804] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107834] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107862] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107902] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107942] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.107988] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108047] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108089] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108119] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108148] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108177] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108206] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108236] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108265] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108293] ctrlr_bdev.c: 
298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108321] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108350] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108380] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108409] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108436] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108464] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108497] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108541] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108572] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108602] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108631] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108660] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108688] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108716] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.108745] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.109139] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.109196] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.404 [2024-07-23 13:55:06.109248] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109294] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109339] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109393] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109440] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109484] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109539] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109587] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 
[2024-07-23 13:55:06.109633] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109684] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109729] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109780] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109827] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109877] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.109926] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:15.405 [2024-07-23 13:55:06.110013] ctrlr_bdev.c: 298:nvmf_bdev_ctrlr_read_cmd: *ERROR*: Read NLB 1 * block size 512 > SGL length 1 00:13:16.343 13:55:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.343 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:16.343 13:55:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:16.343 13:55:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:16.603 true 00:13:16.603 13:55:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:16.603 13:55:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.544 13:55:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.544 13:55:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:17.544 13:55:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:17.803 true 00:13:17.803 13:55:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:17.803 13:55:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.062 13:55:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.062 13:55:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:18.321 13:55:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:18.321 true 00:13:18.321 13:55:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:18.321 13:55:09 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:19.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.701 13:55:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.701 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:19.701 13:55:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:19.701 13:55:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:19.960 true 00:13:19.960 13:55:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:19.960 13:55:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.899 13:55:11 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.899 13:55:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:20.899 13:55:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:21.159 true 00:13:21.159 13:55:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:21.159 13:55:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.159 13:55:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.418 13:55:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:21.418 13:55:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:21.677 true 00:13:21.677 13:55:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:21.677 13:55:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.677 13:55:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.936 13:55:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:21.937 13:55:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:22.195 true 00:13:22.195 13:55:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:22.195 13:55:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.195 13:55:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.462 13:55:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:22.462 
13:55:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:22.721 true 00:13:22.721 13:55:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:22.721 13:55:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.099 13:55:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.099 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:24.099 13:55:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:24.099 13:55:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:24.099 true 00:13:24.099 13:55:15 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:24.099 13:55:15 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.036 13:55:15 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.294 13:55:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:25.294 13:55:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:25.294 true 00:13:25.294 13:55:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:25.294 13:55:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.553 13:55:16 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.811 13:55:16 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:25.811 13:55:16 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:25.811 true 00:13:25.811 13:55:16 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:25.812 13:55:16 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.190 13:55:17 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.190 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:27.190 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:13:27.190 13:55:18 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:27.190 13:55:18 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:27.448 true 00:13:27.448 13:55:18 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:27.448 13:55:18 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.384 13:55:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.384 13:55:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:28.384 13:55:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:28.642 true 00:13:28.642 13:55:19 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:28.642 13:55:19 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.900 13:55:19 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.900 13:55:19 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:28.900 13:55:19 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:29.159 true 00:13:29.159 13:55:20 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:29.159 13:55:20 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.537 13:55:21 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:30.537 13:55:21 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:30.537 13:55:21 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:30.796 true 00:13:30.796 13:55:21 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:30.796 13:55:21 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.731 13:55:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.731 13:55:22 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:31.731 13:55:22 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:31.989 true 00:13:31.989 13:55:22 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992 00:13:31.989 13:55:22 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:31.989 13:55:22 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:32.247 13:55:23 -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:13:32.247 13:55:23 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:13:32.506 true
00:13:32.506 13:55:23 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992
00:13:32.506 13:55:23 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:33.442 Initializing NVMe Controllers
00:13:33.442 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:33.442 Controller IO queue size 128, less than required.
00:13:33.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:33.442 Controller IO queue size 128, less than required.
00:13:33.442 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:33.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:33.442 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:33.442 Initialization complete. Launching workers.
00:13:33.442 ========================================================
00:13:33.442 Latency(us)
00:13:33.442 Device Information : IOPS MiB/s Average min max
00:13:33.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2357.23 1.15 36500.97 2161.41 1083047.55
00:13:33.442 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17821.20 8.70 7182.40 2152.80 377322.63
00:13:33.442 ========================================================
00:13:33.442 Total : 20178.43 9.85 10607.38 2152.80 1083047.55
00:13:33.442
00:13:33.442 13:55:24 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:33.701 13:55:24 -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:13:33.701 13:55:24 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:13:33.960 true
00:13:33.960 13:55:24 -- target/ns_hotplug_stress.sh@44 -- # kill -0 3189992
00:13:33.960 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3189992) - No such process
00:13:33.960 13:55:24 -- target/ns_hotplug_stress.sh@53 -- # wait 3189992
00:13:33.960 13:55:24 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:33.960 13:55:24 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:34.218 13:55:25 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:34.218 13:55:25 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:34.218 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:34.218 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:34.218
13:55:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:34.477 null0 00:13:34.477 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:34.477 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:34.477 13:55:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:34.477 null1 00:13:34.477 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:34.477 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:34.477 13:55:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:34.735 null2 00:13:34.735 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:34.735 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:34.735 13:55:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:34.995 null3 00:13:34.995 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:34.995 13:55:25 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:34.995 13:55:25 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:34.995 null4 00:13:34.995 13:55:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:34.995 13:55:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.254 13:55:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:35.254 null5 00:13:35.254 13:55:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:35.254 13:55:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.254 13:55:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:35.513 null6 00:13:35.513 13:55:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:35.513 13:55:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.513 13:55:26 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:35.513 null7 00:13:35.773 13:55:26 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:35.773 13:55:26 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:35.773 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
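The xtrace entries above (ns_hotplug_stress.sh lines @58-@64, together with the "wait 3195624 3195626 ..." on line @66 further down) cover the parallel phase of the test: eight null bdevs are created, then eight add_remove workers are forked with one namespace ID each. A minimal sketch of that launch loop as it can be reconstructed from the trace alone, not quoted from the SPDK source; rpc_py is an assumed shorthand for the scripts/rpc.py path seen in the trace, and add_remove is the worker helper sketched after the next trace block:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()

    # One null bdev per worker (100 MB, 4096-byte blocks), null0 .. null7,
    # matching the traced "bdev_null_create nullN 100 4096" calls.
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done

    # Fork one add_remove worker per bdev; worker i hot-plugs namespace i+1,
    # which is why add/remove RPCs for NSIDs 1-8 interleave freely below.
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done

    wait "${pids[@]}"

Since every worker owns a distinct NSID, the interleaved nvmf_subsystem_add_ns/nvmf_subsystem_remove_ns calls in the trace should only ever race against in-flight I/O on the target, not against each other.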
00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
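The "local nsid=N bdev=nullN" and "(( i < 10 ))" entries above are those eight workers stepping through the add_remove helper (script lines @14-@18): each worker re-adds and removes its own namespace ten times while the subsystem stays live. A sketch of the helper under the same assumptions as the launch loop above, reconstructed from the trace rather than quoted from the script:

    # Hot-plug namespace $nsid of cnode1 ten times, backed by bdev $bdev.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }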
00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@66 -- # wait 3195624 3195626 3195629 3195632 3195635 3195638 3195641 3195643 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:35.774 13:55:26 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.034 13:55:26 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.323 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.324 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.324 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.324 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.324 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.324 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.324 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.324 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.324 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:13:36.584 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.584 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:36.584 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.584 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.584 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:36.584 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:36.584 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:36.584 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.844 13:55:27 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:36.844 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.104 13:55:27 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.104 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.364 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
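The interleaved (( ++i )) / (( i < 10 )) / rpc.py records above come from ns_hotplug_stress.sh lines 16-18. One plausible reconstruction of that loop, pieced together from the xtrace output alone (the real script may arrange its workers differently), is:

    # Reconstructed sketch of the hotplug stress loop traced above; the
    # structure is inferred from the trace, not copied from the script.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    for n in $(seq 1 8); do
        (
            for ((i = 0; i < 10; ++i)); do                                 # line 16
                $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))" # line 17
                $rpc nvmf_subsystem_remove_ns "$nqn" "$n"                  # line 18
            done
        ) &
    done
    wait

Running eight such workers concurrently is what produces the shuffled add/remove ordering in the log: each namespace is hot-added and hot-removed ten times while initiators stay connected, which is the point of the stress test.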
00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.624 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:37.884 13:55:28 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.144 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.144 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.144 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.144 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.144 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.144 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.144 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.145 13:55:28 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.145 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.405 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.405 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.405 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.405 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.405 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.405 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.405 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.405 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:38.665 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:38.925 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.926 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.926 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:38.926 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.926 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.926 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:38.926 13:55:29 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:38.926 13:55:29 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:38.926 13:55:29 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:39.186 13:55:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:39.186 13:55:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:39.186 13:55:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:39.186 13:55:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:39.186 13:55:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:39.186 13:55:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:39.186 13:55:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:39.186 13:55:30 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:39.445 13:55:30 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:39.445 13:55:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:39.445 13:55:30 -- nvmf/common.sh@116 -- # sync 00:13:39.445 13:55:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:39.445 13:55:30 -- nvmf/common.sh@119 -- # set +e 00:13:39.445 13:55:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:39.445 13:55:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:39.445 rmmod nvme_tcp 00:13:39.445 rmmod nvme_fabrics 00:13:39.445 rmmod nvme_keyring 00:13:39.445 
13:55:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:39.445 13:55:30 -- nvmf/common.sh@123 -- # set -e 00:13:39.445 13:55:30 -- nvmf/common.sh@124 -- # return 0 00:13:39.445 13:55:30 -- nvmf/common.sh@477 -- # '[' -n 3189550 ']' 00:13:39.445 13:55:30 -- nvmf/common.sh@478 -- # killprocess 3189550 00:13:39.445 13:55:30 -- common/autotest_common.sh@926 -- # '[' -z 3189550 ']' 00:13:39.445 13:55:30 -- common/autotest_common.sh@930 -- # kill -0 3189550 00:13:39.445 13:55:30 -- common/autotest_common.sh@931 -- # uname 00:13:39.445 13:55:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:39.445 13:55:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3189550 00:13:39.445 13:55:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:39.445 13:55:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:39.445 13:55:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3189550' 00:13:39.445 killing process with pid 3189550 00:13:39.445 13:55:30 -- common/autotest_common.sh@945 -- # kill 3189550 00:13:39.445 13:55:30 -- common/autotest_common.sh@950 -- # wait 3189550 00:13:39.706 13:55:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:39.706 13:55:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:39.706 13:55:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:39.706 13:55:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:39.706 13:55:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:39.706 13:55:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:39.706 13:55:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.706 13:55:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.616 13:55:32 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:41.616 00:13:41.616 real 0m46.554s 00:13:41.616 user 3m6.506s 00:13:41.616 sys 0m14.518s 00:13:41.616 13:55:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.616 13:55:32 -- common/autotest_common.sh@10 -- # set +x 00:13:41.616 ************************************ 00:13:41.616 END TEST nvmf_ns_hotplug_stress 00:13:41.616 ************************************ 00:13:41.876 13:55:32 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:41.876 13:55:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:41.876 13:55:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.876 13:55:32 -- common/autotest_common.sh@10 -- # set +x 00:13:41.876 ************************************ 00:13:41.876 START TEST nvmf_connect_stress 00:13:41.876 ************************************ 00:13:41.876 13:55:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:41.876 * Looking for test storage... 
00:13:41.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.876 13:55:32 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.876 13:55:32 -- nvmf/common.sh@7 -- # uname -s 00:13:41.876 13:55:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.876 13:55:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.876 13:55:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.876 13:55:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.876 13:55:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.876 13:55:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.876 13:55:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.876 13:55:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.877 13:55:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.877 13:55:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.877 13:55:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:41.877 13:55:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:41.877 13:55:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.877 13:55:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.877 13:55:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.877 13:55:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.877 13:55:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.877 13:55:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.877 13:55:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.877 13:55:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.877 13:55:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.877 13:55:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.877 13:55:32 -- paths/export.sh@5 -- # export PATH 00:13:41.877 13:55:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.877 13:55:32 -- nvmf/common.sh@46 -- # : 0 00:13:41.877 13:55:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:41.877 13:55:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:41.877 13:55:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:41.877 13:55:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:41.877 13:55:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:41.877 13:55:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:41.877 13:55:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:41.877 13:55:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:41.877 13:55:32 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:41.877 13:55:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:41.877 13:55:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:41.877 13:55:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:41.877 13:55:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:41.877 13:55:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:41.877 13:55:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:41.877 13:55:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:41.877 13:55:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.877 13:55:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:41.877 13:55:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:41.877 13:55:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:41.877 13:55:32 -- common/autotest_common.sh@10 -- # set +x 00:13:47.157 13:55:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:47.157 13:55:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:47.157 13:55:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:47.157 13:55:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:47.157 13:55:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:47.157 13:55:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:47.157 13:55:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:47.157 13:55:37 -- nvmf/common.sh@294 -- # net_devs=() 00:13:47.157 13:55:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:47.157 13:55:37 -- nvmf/common.sh@295 -- # e810=() 00:13:47.157 13:55:37 -- nvmf/common.sh@295 -- # local -ga e810 00:13:47.157 13:55:37 -- nvmf/common.sh@296 -- # x722=() 
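The array setup traced here buckets supported NICs by PCI vendor:device ID. A minimal sketch of that bucketing, assuming a pre-populated pci_bus_cache map of "vendor:device" -> bus addresses as nvmf/common.sh uses (the single E810 entry below is illustrative, taken from this node's log):

    declare -A pci_bus_cache=(
        ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"   # E810 ports on this node
    )
    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})  # one of several CX entries
    pci_devs=("${e810[@]}")   # SPDK_TEST_NVMF_NICS=e810 selects this family

Absent families expand to empty cache entries and simply contribute nothing, which is why only the two E810 functions survive into pci_devs below.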
00:13:47.157 13:55:37 -- nvmf/common.sh@296 -- # local -ga x722 00:13:47.157 13:55:37 -- nvmf/common.sh@297 -- # mlx=() 00:13:47.157 13:55:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:47.157 13:55:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.157 13:55:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:47.157 13:55:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:47.157 13:55:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:47.157 13:55:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:47.157 13:55:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:47.157 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:47.157 13:55:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:47.157 13:55:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:47.157 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:47.157 13:55:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:47.157 13:55:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:47.157 13:55:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:47.157 13:55:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.157 13:55:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:47.157 13:55:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.157 13:55:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:47.157 Found net devices under 0000:86:00.0: cvl_0_0 00:13:47.157 13:55:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
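Each matched PCI function is then resolved to its kernel net device through sysfs, exactly as the "Found net devices under ..." records show. A sketch of that resolution step, assuming the standard sysfs layout:

    net_devs=()
    for pci in "${pci_devs[@]}"; do
        # e.g. /sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the ifname
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

The real helper additionally verifies that the glob actually matched (the "(( 1 == 0 ))" checks in the trace) before trusting the result.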
00:13:47.157 13:55:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:47.158 13:55:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.158 13:55:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:47.158 13:55:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.158 13:55:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:47.158 Found net devices under 0000:86:00.1: cvl_0_1 00:13:47.158 13:55:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.158 13:55:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:47.158 13:55:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:47.158 13:55:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:47.158 13:55:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:47.158 13:55:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:47.158 13:55:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.158 13:55:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.158 13:55:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.158 13:55:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:47.158 13:55:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.158 13:55:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.158 13:55:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:47.158 13:55:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.158 13:55:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.158 13:55:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:47.158 13:55:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:47.158 13:55:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.158 13:55:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.158 13:55:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.158 13:55:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.158 13:55:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:47.158 13:55:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.418 13:55:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.418 13:55:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.418 13:55:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:47.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:13:47.418 00:13:47.418 --- 10.0.0.2 ping statistics --- 00:13:47.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.418 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:13:47.418 13:55:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:13:47.418 00:13:47.418 --- 10.0.0.1 ping statistics --- 00:13:47.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.418 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:13:47.418 13:55:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.418 13:55:38 -- nvmf/common.sh@410 -- # return 0 00:13:47.418 13:55:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:47.418 13:55:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.418 13:55:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:47.418 13:55:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:47.418 13:55:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.418 13:55:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:47.418 13:55:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:47.418 13:55:38 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:47.418 13:55:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:47.418 13:55:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:47.418 13:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:47.418 13:55:38 -- nvmf/common.sh@469 -- # nvmfpid=3199854 00:13:47.419 13:55:38 -- nvmf/common.sh@470 -- # waitforlisten 3199854 00:13:47.419 13:55:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:47.419 13:55:38 -- common/autotest_common.sh@819 -- # '[' -z 3199854 ']' 00:13:47.419 13:55:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.419 13:55:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:47.419 13:55:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.419 13:55:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:47.419 13:55:38 -- common/autotest_common.sh@10 -- # set +x 00:13:47.419 [2024-07-23 13:55:38.337590] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:47.419 [2024-07-23 13:55:38.337635] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.419 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.419 [2024-07-23 13:55:38.395570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:47.679 [2024-07-23 13:55:38.466656] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:47.679 [2024-07-23 13:55:38.466764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.679 [2024-07-23 13:55:38.466771] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.679 [2024-07-23 13:55:38.466776] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
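nvmfappstart launches nvmf_tgt inside the target netns and then blocks in waitforlisten until the RPC socket answers. A hedged sketch of that wait, assuming rpc.py's -s flag and the rpc_get_methods RPC (both standard SPDK; the retry shape and path here are simplified):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < 100; i++)); do               # max_retries=100, per the trace
            kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                              # RPC server is answering
            fi
            sleep 0.5
        done
        return 1
    }

Once it returns, the script proceeds to create the TCP transport, subsystem cnode1, the 10.0.0.2:4420 listener, and the NULL1 bdev seen below.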
00:13:47.679 [2024-07-23 13:55:38.466871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:47.679 [2024-07-23 13:55:38.466937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:47.679 [2024-07-23 13:55:38.466938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.248 13:55:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:48.248 13:55:39 -- common/autotest_common.sh@852 -- # return 0 00:13:48.248 13:55:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:48.248 13:55:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:48.248 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:48.248 13:55:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.248 13:55:39 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.248 13:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.248 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:48.248 [2024-07-23 13:55:39.184404] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.248 13:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.248 13:55:39 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.248 13:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.248 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:48.248 13:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.248 13:55:39 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.248 13:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.248 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:48.248 [2024-07-23 13:55:39.218174] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.248 13:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.248 13:55:39 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.248 13:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.248 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:48.248 NULL1 00:13:48.248 13:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.248 13:55:39 -- target/connect_stress.sh@21 -- # PERF_PID=3200106 00:13:48.248 13:55:39 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.248 13:55:39 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:48.248 13:55:39 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:48.248 13:55:39 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:48.248 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.248 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.248 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.248 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.248 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.248 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.248 13:55:39 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.248 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.248 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.248 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:48.508 13:55:39 -- target/connect_stress.sh@28 -- # cat 00:13:48.508 13:55:39 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:48.508 13:55:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.508 13:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.508 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:48.768 13:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.768 13:55:39 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:48.768 13:55:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.768 13:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.768 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:49.028 13:55:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.028 13:55:39 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:49.028 13:55:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.028 13:55:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.028 13:55:39 -- common/autotest_common.sh@10 -- # set +x 00:13:49.288 13:55:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.288 13:55:40 -- target/connect_stress.sh@34 -- # 
kill -0 3200106 00:13:49.288 13:55:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.288 13:55:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.288 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:49.857 13:55:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:49.857 13:55:40 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:49.857 13:55:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.857 13:55:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:49.857 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:50.117 13:55:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.117 13:55:40 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:50.117 13:55:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.117 13:55:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.117 13:55:40 -- common/autotest_common.sh@10 -- # set +x 00:13:50.376 13:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.376 13:55:41 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:50.376 13:55:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.376 13:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.376 13:55:41 -- common/autotest_common.sh@10 -- # set +x 00:13:50.636 13:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:50.636 13:55:41 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:50.636 13:55:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.636 13:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:50.636 13:55:41 -- common/autotest_common.sh@10 -- # set +x 00:13:50.895 13:55:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.155 13:55:41 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:51.155 13:55:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.155 13:55:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.155 13:55:41 -- common/autotest_common.sh@10 -- # set +x 00:13:51.415 13:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.415 13:55:42 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:51.415 13:55:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.415 13:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.415 13:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:51.674 13:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.674 13:55:42 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:51.675 13:55:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.675 13:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.675 13:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:51.933 13:55:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:51.933 13:55:42 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:51.933 13:55:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.933 13:55:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:51.933 13:55:42 -- common/autotest_common.sh@10 -- # set +x 00:13:52.502 13:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.502 13:55:43 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:52.502 13:55:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.502 13:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.502 13:55:43 -- common/autotest_common.sh@10 -- # set +x 00:13:52.786 13:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.786 13:55:43 -- target/connect_stress.sh@34 -- # kill -0 
3200106 00:13:52.786 13:55:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.786 13:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.786 13:55:43 -- common/autotest_common.sh@10 -- # set +x 00:13:53.044 13:55:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.044 13:55:43 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:53.044 13:55:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.044 13:55:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.044 13:55:43 -- common/autotest_common.sh@10 -- # set +x 00:13:53.302 13:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.302 13:55:44 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:53.302 13:55:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.302 13:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.302 13:55:44 -- common/autotest_common.sh@10 -- # set +x 00:13:53.561 13:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.561 13:55:44 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:53.561 13:55:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.561 13:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.561 13:55:44 -- common/autotest_common.sh@10 -- # set +x 00:13:53.820 13:55:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:53.820 13:55:44 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:53.820 13:55:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.820 13:55:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:53.820 13:55:44 -- common/autotest_common.sh@10 -- # set +x 00:13:54.389 13:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.389 13:55:45 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:54.389 13:55:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.389 13:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.389 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:13:54.648 13:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.648 13:55:45 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:54.648 13:55:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.648 13:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.648 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:13:54.907 13:55:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:54.907 13:55:45 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:54.907 13:55:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.907 13:55:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.907 13:55:45 -- common/autotest_common.sh@10 -- # set +x 00:13:55.166 13:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.166 13:55:46 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:55.166 13:55:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.166 13:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.166 13:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:55.733 13:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.733 13:55:46 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:55.733 13:55:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.733 13:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.733 13:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:55.991 13:55:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.991 13:55:46 -- target/connect_stress.sh@34 -- # kill -0 3200106 
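The repeated kill -0 3200106 / rpc_cmd pairs above and below are connect_stress.sh supervising its stress binary: while the connect_stress tool (PERF_PID 3200106, launched with -t 10) keeps connecting and disconnecting hosts, the script replays a batch of RPCs at the target. A sketch of that supervision loop; the contents of the batched RPC file are not visible in the trace, so rpc_get_methods stands in for them, and rpc_cmd is assumed to accept a batch on stdin as the bare rpc_cmd records suggest:

    rpcs=$testdir/rpc.txt                       # per line 23 in the trace above
    for i in $(seq 1 20); do                    # line 27
        echo "rpc_get_methods" >> "$rpcs"       # line 28 is a cat heredoc; its
    done                                        # real contents are not shown
    while kill -0 "$PERF_PID" 2> /dev/null; do  # line 34: stress tool still alive?
        rpc_cmd < "$rpcs"                       # line 35: replay the batch
    done

The loop ends when the 10-second run expires and kill -0 fails with "No such process", which is exactly how the test winds down just below.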
00:13:55.991 13:55:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.991 13:55:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:55.991 13:55:46 -- common/autotest_common.sh@10 -- # set +x 00:13:56.250 13:55:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.250 13:55:47 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:56.250 13:55:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.250 13:55:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.250 13:55:47 -- common/autotest_common.sh@10 -- # set +x 00:13:56.509 13:55:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.509 13:55:47 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:56.509 13:55:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.509 13:55:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.509 13:55:47 -- common/autotest_common.sh@10 -- # set +x 00:13:56.768 13:55:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.768 13:55:47 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:56.768 13:55:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.768 13:55:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.768 13:55:47 -- common/autotest_common.sh@10 -- # set +x 00:13:57.337 13:55:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.337 13:55:48 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:57.337 13:55:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.337 13:55:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.337 13:55:48 -- common/autotest_common.sh@10 -- # set +x 00:13:57.597 13:55:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.597 13:55:48 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:57.597 13:55:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.597 13:55:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.597 13:55:48 -- common/autotest_common.sh@10 -- # set +x 00:13:57.856 13:55:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:57.856 13:55:48 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:57.856 13:55:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:57.856 13:55:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:57.856 13:55:48 -- common/autotest_common.sh@10 -- # set +x 00:13:58.116 13:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.116 13:55:49 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:58.116 13:55:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:58.116 13:55:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:58.116 13:55:49 -- common/autotest_common.sh@10 -- # set +x 00:13:58.376 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:58.376 13:55:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:58.376 13:55:49 -- target/connect_stress.sh@34 -- # kill -0 3200106 00:13:58.376 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3200106) - No such process 00:13:58.376 13:55:49 -- target/connect_stress.sh@38 -- # wait 3200106 00:13:58.376 13:55:49 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:58.376 13:55:49 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:58.376 13:55:49 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:58.376 13:55:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:58.376 13:55:49 -- nvmf/common.sh@116 -- # sync 
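The stretch above is connect_stress.sh's supervision loop: line 34 probes the stress process with kill -0 (signal 0 delivers nothing, it only tests whether the PID exists) and line 35 fires an RPC on every pass, until the probe reports "No such process", the script reaps PID 3200106, and rpc.txt is removed. A minimal bash reconstruction of that pattern follows; it is a sketch, not the verbatim SPDK script, and STRESS_PID plus the stubbed rpc_cmd are stand-ins so the snippet runs on its own.

    #!/usr/bin/env bash
    # Sketch of the poll loop at connect_stress.sh lines 34-39 (stand-in names).
    rpc_cmd() { :; }                               # the harness's RPC helper, stubbed here
    STRESS_PID=$1
    while kill -0 "$STRESS_PID" 2>/dev/null; do    # true only while the stressor is alive
        rpc_cmd > rpc.txt                          # keep the RPC socket busy on every pass
    done
    wait "$STRESS_PID" 2>/dev/null                 # reap it; "No such process" is the expected exit
    rm -f rpc.txt                                  # matches the rm -f at line 39 above

The point of the design is that the RPC path is exercised continuously while connections churn, so a target that wedges under connect/disconnect load shows up as a hung rpc_cmd rather than a silent pass.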
00:13:58.636 13:55:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:58.636 13:55:49 -- nvmf/common.sh@119 -- # set +e 00:13:58.636 13:55:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:58.636 13:55:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:58.636 rmmod nvme_tcp 00:13:58.636 rmmod nvme_fabrics 00:13:58.636 rmmod nvme_keyring 00:13:58.636 13:55:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:58.636 13:55:49 -- nvmf/common.sh@123 -- # set -e 00:13:58.636 13:55:49 -- nvmf/common.sh@124 -- # return 0 00:13:58.636 13:55:49 -- nvmf/common.sh@477 -- # '[' -n 3199854 ']' 00:13:58.636 13:55:49 -- nvmf/common.sh@478 -- # killprocess 3199854 00:13:58.636 13:55:49 -- common/autotest_common.sh@926 -- # '[' -z 3199854 ']' 00:13:58.636 13:55:49 -- common/autotest_common.sh@930 -- # kill -0 3199854 00:13:58.636 13:55:49 -- common/autotest_common.sh@931 -- # uname 00:13:58.636 13:55:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:58.636 13:55:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3199854 00:13:58.636 13:55:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:58.636 13:55:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:58.636 13:55:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3199854' 00:13:58.636 killing process with pid 3199854 00:13:58.636 13:55:49 -- common/autotest_common.sh@945 -- # kill 3199854 00:13:58.636 13:55:49 -- common/autotest_common.sh@950 -- # wait 3199854 00:13:58.896 13:55:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:58.896 13:55:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:58.896 13:55:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:58.896 13:55:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:58.896 13:55:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:58.896 13:55:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.896 13:55:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.896 13:55:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.806 13:55:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:00.806 00:14:00.806 real 0m19.121s 00:14:00.806 user 0m40.863s 00:14:00.806 sys 0m8.145s 00:14:00.806 13:55:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.806 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:14:00.806 ************************************ 00:14:00.806 END TEST nvmf_connect_stress 00:14:00.806 ************************************ 00:14:01.066 13:55:51 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:01.066 13:55:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:01.066 13:55:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:01.066 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:14:01.066 ************************************ 00:14:01.066 START TEST nvmf_fused_ordering 00:14:01.066 ************************************ 00:14:01.066 13:55:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:01.066 * Looking for test storage... 
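Before the fused_ordering storage lookup continues below, note the teardown shape that just closed out connect_stress and that nvmftestfini will repeat at the end of this test too: module unload runs under set +e inside a 20-try loop, because modprobe -r nvme-tcp can fail transiently while the target still holds references. A sketch of that pattern, with the caveat that the sleep back-off is an assumption; the log above only shows the first pass succeeding.

    # nvmftestfini's unload retry, reconstructed (sketch, not verbatim):
    nvmf_module_cleanup() {
        set +e                                  # unloading may fail while references drain
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1                             # assumed back-off between attempts
        done
        set -e
    }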
00:14:01.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:01.066 13:55:51 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:01.066 13:55:51 -- nvmf/common.sh@7 -- # uname -s 00:14:01.066 13:55:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:01.066 13:55:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:01.066 13:55:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:01.067 13:55:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:01.067 13:55:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:01.067 13:55:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:01.067 13:55:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:01.067 13:55:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:01.067 13:55:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:01.067 13:55:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:01.067 13:55:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:01.067 13:55:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:01.067 13:55:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:01.067 13:55:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:01.067 13:55:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:01.067 13:55:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:01.067 13:55:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:01.067 13:55:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:01.067 13:55:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:01.067 13:55:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.067 13:55:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.067 13:55:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.067 13:55:51 -- paths/export.sh@5 -- # export PATH 00:14:01.067 13:55:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:01.067 13:55:51 -- nvmf/common.sh@46 -- # : 0 00:14:01.067 13:55:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:01.067 13:55:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:01.067 13:55:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:01.067 13:55:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:01.067 13:55:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:01.067 13:55:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:01.067 13:55:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:01.067 13:55:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:01.067 13:55:51 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:01.067 13:55:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:01.067 13:55:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:01.067 13:55:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:01.067 13:55:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:01.067 13:55:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:01.067 13:55:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:01.067 13:55:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:01.067 13:55:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:01.067 13:55:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:01.067 13:55:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:01.067 13:55:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:01.067 13:55:51 -- common/autotest_common.sh@10 -- # set +x 00:14:06.343 13:55:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:06.343 13:55:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:06.343 13:55:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:06.343 13:55:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:06.343 13:55:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:06.343 13:55:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:06.343 13:55:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:06.343 13:55:57 -- nvmf/common.sh@294 -- # net_devs=() 00:14:06.344 13:55:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:06.344 13:55:57 -- nvmf/common.sh@295 -- # e810=() 00:14:06.344 13:55:57 -- nvmf/common.sh@295 -- # local -ga e810 00:14:06.344 13:55:57 -- nvmf/common.sh@296 -- # x722=() 
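The PATH values echoed above are the same three toolchain prefixes (golangci, protoc, go) prepended once more on each re-sourcing of paths/export.sh, so the exported PATH by now carries roughly half a dozen copies of each entry. That is harmless but noisy; a small helper that would collapse such duplicates while keeping first-seen order is sketched below (illustrative only, not part of the SPDK scripts).

    # Drop repeated PATH entries, preserving the first occurrence of each.
    dedup_path() {
        printf '%s' "$1" | awk -v RS=: '!seen[$0]++' | paste -sd: -
    }
    PATH=$(dedup_path "$PATH")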
00:14:06.344 13:55:57 -- nvmf/common.sh@296 -- # local -ga x722 00:14:06.344 13:55:57 -- nvmf/common.sh@297 -- # mlx=() 00:14:06.344 13:55:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:06.344 13:55:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.344 13:55:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:06.344 13:55:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:06.344 13:55:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:06.344 13:55:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:06.344 13:55:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:06.344 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:06.344 13:55:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:06.344 13:55:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:06.344 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:06.344 13:55:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:06.344 13:55:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:06.344 13:55:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.344 13:55:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:06.344 13:55:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.344 13:55:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:06.344 Found net devices under 0000:86:00.0: cvl_0_0 00:14:06.344 13:55:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
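What just happened above: gather_supported_nvmf_pci_devs matched the two 0x8086:0x159b functions at 0000:86:00.0 and 0000:86:00.1 against the e810 ID list, then resolved each to its kernel netdev (cvl_0_0, cvl_0_1) through /sys/bus/pci/devices/$pci/net. A standalone sketch of that classification step follows; the lspci parsing is my stand-in for the script's pci_bus_cache, and only the two E810 IDs visible in the log are matched.

    # Match NICs by PCI vendor:device ID the way the e810 array above does.
    intel=0x8086
    e810=()
    while read -r slot class vendor device _; do
        case "$vendor:$device" in
            "$intel:0x1592"|"$intel:0x159b") e810+=("$slot") ;;  # E810 IDs from the log
        esac
    done < <(lspci -Dnmm | tr -d '"' | awk '{print $1, $2, "0x"$3, "0x"$4}')
    for pci in "${e810[@]}"; do
        echo "Found $pci -> $(ls "/sys/bus/pci/devices/$pci/net")"  # netdev name, e.g. cvl_0_0
    done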
00:14:06.344 13:55:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:06.344 13:55:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.344 13:55:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:06.344 13:55:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.344 13:55:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:06.344 Found net devices under 0000:86:00.1: cvl_0_1 00:14:06.344 13:55:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.344 13:55:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:06.344 13:55:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:06.344 13:55:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:06.344 13:55:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.344 13:55:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.344 13:55:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.344 13:55:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:06.344 13:55:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.344 13:55:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.344 13:55:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:06.344 13:55:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.344 13:55:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.344 13:55:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:06.344 13:55:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:06.344 13:55:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.344 13:55:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.344 13:55:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.344 13:55:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.344 13:55:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:06.344 13:55:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.344 13:55:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.344 13:55:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.344 13:55:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:06.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:14:06.344 00:14:06.344 --- 10.0.0.2 ping statistics --- 00:14:06.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.344 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:06.344 13:55:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.309 ms 00:14:06.344 00:14:06.344 --- 10.0.0.1 ping statistics --- 00:14:06.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.344 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:14:06.344 13:55:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.344 13:55:57 -- nvmf/common.sh@410 -- # return 0 00:14:06.344 13:55:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.344 13:55:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.344 13:55:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.344 13:55:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.344 13:55:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.344 13:55:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.603 13:55:57 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:06.603 13:55:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.603 13:55:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:06.603 13:55:57 -- common/autotest_common.sh@10 -- # set +x 00:14:06.603 13:55:57 -- nvmf/common.sh@469 -- # nvmfpid=3205293 00:14:06.603 13:55:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:06.603 13:55:57 -- nvmf/common.sh@470 -- # waitforlisten 3205293 00:14:06.603 13:55:57 -- common/autotest_common.sh@819 -- # '[' -z 3205293 ']' 00:14:06.603 13:55:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.603 13:55:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:06.603 13:55:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.603 13:55:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:06.604 13:55:57 -- common/autotest_common.sh@10 -- # set +x 00:14:06.604 [2024-07-23 13:55:57.412157] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:06.604 [2024-07-23 13:55:57.412199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.604 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.604 [2024-07-23 13:55:57.470458] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.604 [2024-07-23 13:55:57.549444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:06.604 [2024-07-23 13:55:57.549549] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.604 [2024-07-23 13:55:57.549556] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.604 [2024-07-23 13:55:57.549562] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
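The nvmf_tcp_init block above built the usual two-namespace rig before the app started: the target port cvl_0_0 moved into namespace cvl_0_0_ns_spdk as 10.0.0.2/24, the initiator port cvl_0_1 stayed in the root namespace as 10.0.0.1/24, TCP port 4420 was opened, and one ping in each direction proved the path (0.169 ms and 0.309 ms above). Collected into a single runnable block, commands taken directly from the log (run as root):

    NS=cvl_0_0_ns_spdk
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                         # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1                  # target -> initiator

Putting the target NIC in its own namespace lets one physical host act as both NVMe-oF target and initiator over a real link, which is why the nvmf_tgt below is launched with ip netns exec cvl_0_0_ns_spdk.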
00:14:06.604 [2024-07-23 13:55:57.549576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:07.539 13:55:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:07.539 13:55:58 -- common/autotest_common.sh@852 -- # return 0 00:14:07.539 13:55:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:07.539 13:55:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:07.539 13:55:58 -- common/autotest_common.sh@10 -- # set +x 00:14:07.539 13:55:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.539 13:55:58 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:07.539 13:55:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.539 13:55:58 -- common/autotest_common.sh@10 -- # set +x 00:14:07.539 [2024-07-23 13:55:58.237081] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.539 13:55:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.539 13:55:58 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:07.539 13:55:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.539 13:55:58 -- common/autotest_common.sh@10 -- # set +x 00:14:07.539 13:55:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.539 13:55:58 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:07.539 13:55:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.539 13:55:58 -- common/autotest_common.sh@10 -- # set +x 00:14:07.539 [2024-07-23 13:55:58.253189] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:07.539 13:55:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.539 13:55:58 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:07.539 13:55:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.539 13:55:58 -- common/autotest_common.sh@10 -- # set +x 00:14:07.539 NULL1 00:14:07.539 13:55:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.539 13:55:58 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:07.539 13:55:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.539 13:55:58 -- common/autotest_common.sh@10 -- # set +x 00:14:07.539 13:55:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.539 13:55:58 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:07.539 13:55:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:07.539 13:55:58 -- common/autotest_common.sh@10 -- # set +x 00:14:07.539 13:55:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:07.539 13:55:58 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:07.539 [2024-07-23 13:55:58.305865] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
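The subsystem the initiator attacks below was assembled through six rpc_cmd calls just above. The same sequence, written as direct scripts/rpc.py invocations, is sketched here; the method names and flags are exactly as logged, but treating rpc.py against the default /var/tmp/spdk.sock as equivalent to the harness's rpc_cmd wrapper is an assumption.

    rpc=scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                     # flags as logged above
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512         # 1000 MiB of 512 B blocks -> "size: 1GB" below
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1      # becomes Namespace ID: 1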
00:14:07.539 [2024-07-23 13:55:58.305907] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3205325 ] 00:14:07.539 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.108 Attached to nqn.2016-06.io.spdk:cnode1 00:14:08.108 Namespace ID: 1 size: 1GB 00:14:08.108 fused_ordering(0) 00:14:08.108 fused_ordering(1) 00:14:08.108 fused_ordering(2) 00:14:08.108 fused_ordering(3) 00:14:08.108 fused_ordering(4) 00:14:08.108 fused_ordering(5) 00:14:08.108 fused_ordering(6) 00:14:08.108 fused_ordering(7) 00:14:08.108 fused_ordering(8) 00:14:08.108 fused_ordering(9) 00:14:08.108 fused_ordering(10) 00:14:08.108 fused_ordering(11) 00:14:08.108 fused_ordering(12) 00:14:08.108 fused_ordering(13) 00:14:08.108 fused_ordering(14) 00:14:08.108 fused_ordering(15) 00:14:08.108 fused_ordering(16) 00:14:08.108 fused_ordering(17) 00:14:08.108 fused_ordering(18) 00:14:08.108 fused_ordering(19) 00:14:08.108 fused_ordering(20) 00:14:08.108 fused_ordering(21) 00:14:08.108 fused_ordering(22) 00:14:08.108 fused_ordering(23) 00:14:08.108 fused_ordering(24) 00:14:08.108 fused_ordering(25) 00:14:08.108 fused_ordering(26) 00:14:08.108 fused_ordering(27) 00:14:08.108 fused_ordering(28) 00:14:08.108 fused_ordering(29) 00:14:08.108 fused_ordering(30) 00:14:08.108 fused_ordering(31) 00:14:08.108 fused_ordering(32) 00:14:08.108 fused_ordering(33) 00:14:08.108 fused_ordering(34) 00:14:08.108 fused_ordering(35) 00:14:08.108 fused_ordering(36) 00:14:08.108 fused_ordering(37) 00:14:08.108 fused_ordering(38) 00:14:08.108 fused_ordering(39) 00:14:08.108 fused_ordering(40) 00:14:08.108 fused_ordering(41) 00:14:08.108 fused_ordering(42) 00:14:08.108 fused_ordering(43) 00:14:08.108 fused_ordering(44) 00:14:08.108 fused_ordering(45) 00:14:08.108 fused_ordering(46) 00:14:08.108 fused_ordering(47) 00:14:08.108 fused_ordering(48) 00:14:08.108 fused_ordering(49) 00:14:08.108 fused_ordering(50) 00:14:08.108 fused_ordering(51) 00:14:08.108 fused_ordering(52) 00:14:08.108 fused_ordering(53) 00:14:08.108 fused_ordering(54) 00:14:08.108 fused_ordering(55) 00:14:08.108 fused_ordering(56) 00:14:08.108 fused_ordering(57) 00:14:08.108 fused_ordering(58) 00:14:08.108 fused_ordering(59) 00:14:08.108 fused_ordering(60) 00:14:08.108 fused_ordering(61) 00:14:08.108 fused_ordering(62) 00:14:08.108 fused_ordering(63) 00:14:08.108 fused_ordering(64) 00:14:08.108 fused_ordering(65) 00:14:08.108 fused_ordering(66) 00:14:08.108 fused_ordering(67) 00:14:08.108 fused_ordering(68) 00:14:08.108 fused_ordering(69) 00:14:08.108 fused_ordering(70) 00:14:08.108 fused_ordering(71) 00:14:08.108 fused_ordering(72) 00:14:08.108 fused_ordering(73) 00:14:08.108 fused_ordering(74) 00:14:08.108 fused_ordering(75) 00:14:08.108 fused_ordering(76) 00:14:08.108 fused_ordering(77) 00:14:08.108 fused_ordering(78) 00:14:08.108 fused_ordering(79) 00:14:08.108 fused_ordering(80) 00:14:08.108 fused_ordering(81) 00:14:08.108 fused_ordering(82) 00:14:08.108 fused_ordering(83) 00:14:08.108 fused_ordering(84) 00:14:08.108 fused_ordering(85) 00:14:08.108 fused_ordering(86) 00:14:08.108 fused_ordering(87) 00:14:08.108 fused_ordering(88) 00:14:08.108 fused_ordering(89) 00:14:08.108 fused_ordering(90) 00:14:08.108 fused_ordering(91) 00:14:08.108 fused_ordering(92) 00:14:08.108 fused_ordering(93) 00:14:08.108 fused_ordering(94) 00:14:08.108 fused_ordering(95) 00:14:08.108 fused_ordering(96) 00:14:08.108 
fused_ordering(97) 00:14:08.108 [fused_ordering(98) through fused_ordering(956) logged in unbroken ascending order, timestamps 00:14:08.108-00:14:11.521]
fused_ordering(957) 00:14:11.521 fused_ordering(958) 00:14:11.521 fused_ordering(959) 00:14:11.521 fused_ordering(960) 00:14:11.521 fused_ordering(961) 00:14:11.521 fused_ordering(962) 00:14:11.521 fused_ordering(963) 00:14:11.521 fused_ordering(964) 00:14:11.521 fused_ordering(965) 00:14:11.521 fused_ordering(966) 00:14:11.521 fused_ordering(967) 00:14:11.521 fused_ordering(968) 00:14:11.521 fused_ordering(969) 00:14:11.521 fused_ordering(970) 00:14:11.521 fused_ordering(971) 00:14:11.521 fused_ordering(972) 00:14:11.521 fused_ordering(973) 00:14:11.521 fused_ordering(974) 00:14:11.521 fused_ordering(975) 00:14:11.521 fused_ordering(976) 00:14:11.521 fused_ordering(977) 00:14:11.521 fused_ordering(978) 00:14:11.521 fused_ordering(979) 00:14:11.521 fused_ordering(980) 00:14:11.521 fused_ordering(981) 00:14:11.521 fused_ordering(982) 00:14:11.521 fused_ordering(983) 00:14:11.521 fused_ordering(984) 00:14:11.521 fused_ordering(985) 00:14:11.521 fused_ordering(986) 00:14:11.521 fused_ordering(987) 00:14:11.521 fused_ordering(988) 00:14:11.521 fused_ordering(989) 00:14:11.521 fused_ordering(990) 00:14:11.521 fused_ordering(991) 00:14:11.521 fused_ordering(992) 00:14:11.521 fused_ordering(993) 00:14:11.521 fused_ordering(994) 00:14:11.521 fused_ordering(995) 00:14:11.521 fused_ordering(996) 00:14:11.521 fused_ordering(997) 00:14:11.521 fused_ordering(998) 00:14:11.521 fused_ordering(999) 00:14:11.521 fused_ordering(1000) 00:14:11.521 fused_ordering(1001) 00:14:11.521 fused_ordering(1002) 00:14:11.521 fused_ordering(1003) 00:14:11.521 fused_ordering(1004) 00:14:11.521 fused_ordering(1005) 00:14:11.521 fused_ordering(1006) 00:14:11.521 fused_ordering(1007) 00:14:11.521 fused_ordering(1008) 00:14:11.521 fused_ordering(1009) 00:14:11.521 fused_ordering(1010) 00:14:11.521 fused_ordering(1011) 00:14:11.521 fused_ordering(1012) 00:14:11.521 fused_ordering(1013) 00:14:11.521 fused_ordering(1014) 00:14:11.521 fused_ordering(1015) 00:14:11.521 fused_ordering(1016) 00:14:11.521 fused_ordering(1017) 00:14:11.521 fused_ordering(1018) 00:14:11.521 fused_ordering(1019) 00:14:11.521 fused_ordering(1020) 00:14:11.521 fused_ordering(1021) 00:14:11.521 fused_ordering(1022) 00:14:11.521 fused_ordering(1023) 00:14:11.521 13:56:02 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:11.521 13:56:02 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:11.521 13:56:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:11.521 13:56:02 -- nvmf/common.sh@116 -- # sync 00:14:11.521 13:56:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:11.521 13:56:02 -- nvmf/common.sh@119 -- # set +e 00:14:11.521 13:56:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:11.521 13:56:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:11.521 rmmod nvme_tcp 00:14:11.521 rmmod nvme_fabrics 00:14:11.521 rmmod nvme_keyring 00:14:11.521 13:56:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:11.521 13:56:02 -- nvmf/common.sh@123 -- # set -e 00:14:11.521 13:56:02 -- nvmf/common.sh@124 -- # return 0 00:14:11.521 13:56:02 -- nvmf/common.sh@477 -- # '[' -n 3205293 ']' 00:14:11.521 13:56:02 -- nvmf/common.sh@478 -- # killprocess 3205293 00:14:11.521 13:56:02 -- common/autotest_common.sh@926 -- # '[' -z 3205293 ']' 00:14:11.521 13:56:02 -- common/autotest_common.sh@930 -- # kill -0 3205293 00:14:11.521 13:56:02 -- common/autotest_common.sh@931 -- # uname 00:14:11.521 13:56:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:11.521 13:56:02 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 3205293 00:14:11.521 13:56:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:11.521 13:56:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:11.521 13:56:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3205293' 00:14:11.521 killing process with pid 3205293 00:14:11.521 13:56:02 -- common/autotest_common.sh@945 -- # kill 3205293 00:14:11.521 13:56:02 -- common/autotest_common.sh@950 -- # wait 3205293 00:14:11.789 13:56:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:11.789 13:56:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:11.789 13:56:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:11.789 13:56:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:11.789 13:56:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:11.789 13:56:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.789 13:56:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:11.789 13:56:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.692 13:56:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:13.692 00:14:13.692 real 0m12.749s 00:14:13.692 user 0m7.974s 00:14:13.692 sys 0m7.043s 00:14:13.692 13:56:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.692 13:56:04 -- common/autotest_common.sh@10 -- # set +x 00:14:13.692 ************************************ 00:14:13.692 END TEST nvmf_fused_ordering 00:14:13.692 ************************************ 00:14:13.692 13:56:04 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:13.692 13:56:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:13.692 13:56:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:13.692 13:56:04 -- common/autotest_common.sh@10 -- # set +x 00:14:13.692 ************************************ 00:14:13.693 START TEST nvmf_delete_subsystem 00:14:13.693 ************************************ 00:14:13.693 13:56:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:13.693 * Looking for test storage... 
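The START TEST / END TEST banners above are printed by SPDK's run_test helper (test/common/autotest_common.sh), which runs each sub-test under `time` so every section closes with the real/user/sys totals seen in this log. A minimal sketch of that observable shape only — the real helper's argument validation and error bookkeeping are not visible here and are simplified:

    # simplified run_test: illustrates the banner/timing pattern, not the SPDK implementation
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"    # e.g. run_test nvmf_delete_subsystem .../delete_subsystem.sh --transport=tcp
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }
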
00:14:13.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.693 13:56:04 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.951 13:56:04 -- nvmf/common.sh@7 -- # uname -s 00:14:13.951 13:56:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.951 13:56:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.951 13:56:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.951 13:56:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.951 13:56:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.951 13:56:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.951 13:56:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.951 13:56:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.951 13:56:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.951 13:56:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.951 13:56:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:13.951 13:56:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:13.951 13:56:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.951 13:56:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.951 13:56:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.951 13:56:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.951 13:56:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.951 13:56:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.951 13:56:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.951 13:56:04 -- paths/export.sh@2 -- # [paths/export.sh@2–@6: PATH is repeatedly prefixed with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed; the near-identical, multi-hundred-character PATH values are elided for readability] 00:14:13.952 13:56:04 -- nvmf/common.sh@46 -- # : 0 00:14:13.952 13:56:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:13.952 13:56:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:13.952 13:56:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:13.952 13:56:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.952 13:56:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.952 13:56:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:13.952 13:56:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:13.952 13:56:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:13.952 13:56:04 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:13.952 13:56:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:13.952 13:56:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.952 13:56:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:13.952 13:56:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:13.952 13:56:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:13.952 13:56:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.952 13:56:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:13.952 13:56:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.952 13:56:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:13.952 13:56:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:13.952 13:56:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:13.952 13:56:04 -- common/autotest_common.sh@10 -- # set +x 00:14:19.219 13:56:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:19.219 13:56:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:19.219 13:56:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:19.219 13:56:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:19.219 13:56:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:19.219 13:56:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:19.219 13:56:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:19.219 13:56:09 -- nvmf/common.sh@294 -- # net_devs=() 00:14:19.219 13:56:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:19.219 13:56:09 -- nvmf/common.sh@295 -- # e810=() 00:14:19.219 13:56:09 -- nvmf/common.sh@295 -- # local -ga e810 00:14:19.219 13:56:09 -- nvmf/common.sh@296 -- # x722=() 
00:14:19.219 13:56:09 -- nvmf/common.sh@296 -- # local -ga x722 00:14:19.219 13:56:09 -- nvmf/common.sh@297 -- # mlx=() 00:14:19.219 13:56:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:19.219 13:56:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.219 13:56:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:19.219 13:56:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:19.219 13:56:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:19.219 13:56:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.219 13:56:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:19.219 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:19.219 13:56:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:19.219 13:56:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:19.219 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:19.219 13:56:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:19.219 13:56:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:19.219 13:56:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.219 13:56:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.219 13:56:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.219 13:56:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.219 13:56:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:19.219 Found net devices under 0000:86:00.0: cvl_0_0 00:14:19.219 13:56:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
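The PCI walk above binds the test to real e810 hardware: nvmf/common.sh collects the supported device IDs (0x1592/0x159b here), then resolves each PCI function to its kernel interface through sysfs. A condensed sketch of that idiom, using only what the trace itself shows; variable names follow the trace, but this is illustrative rather than the verbatim script:

    # condensed from the nvmf/common.sh trace above
    for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:86:00.0/net/cvl_0_0
      (( ${#pci_net_devs[@]} == 0 )) && continue          # mirrors the @383 guard in the trace
      pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path, keep the ifname
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
    done
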
00:14:19.219 13:56:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:19.219 13:56:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.219 13:56:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:19.219 13:56:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.219 13:56:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:19.219 Found net devices under 0000:86:00.1: cvl_0_1 00:14:19.219 13:56:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.219 13:56:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:19.219 13:56:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:19.220 13:56:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:19.220 13:56:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:19.220 13:56:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:19.220 13:56:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.220 13:56:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.220 13:56:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.220 13:56:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:19.220 13:56:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.220 13:56:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.220 13:56:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:19.220 13:56:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.220 13:56:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.220 13:56:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:19.220 13:56:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:19.220 13:56:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.220 13:56:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.220 13:56:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.220 13:56:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:19.220 13:56:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:19.220 13:56:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.220 13:56:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.220 13:56:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.220 13:56:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:19.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:14:19.220 00:14:19.220 --- 10.0.0.2 ping statistics --- 00:14:19.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.220 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:14:19.220 13:56:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:19.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:14:19.220 00:14:19.220 --- 10.0.0.1 ping statistics --- 00:14:19.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.220 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:14:19.220 13:56:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.220 13:56:09 -- nvmf/common.sh@410 -- # return 0 00:14:19.220 13:56:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:19.220 13:56:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.220 13:56:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:19.220 13:56:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:19.220 13:56:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.220 13:56:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:19.220 13:56:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:19.220 13:56:09 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:19.220 13:56:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:19.220 13:56:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:19.220 13:56:09 -- common/autotest_common.sh@10 -- # set +x 00:14:19.220 13:56:09 -- nvmf/common.sh@469 -- # nvmfpid=3209575 00:14:19.220 13:56:09 -- nvmf/common.sh@470 -- # waitforlisten 3209575 00:14:19.220 13:56:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:19.220 13:56:09 -- common/autotest_common.sh@819 -- # '[' -z 3209575 ']' 00:14:19.220 13:56:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.220 13:56:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:19.220 13:56:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.220 13:56:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:19.220 13:56:09 -- common/autotest_common.sh@10 -- # set +x 00:14:19.220 [2024-07-23 13:56:10.009178] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:19.220 [2024-07-23 13:56:10.009220] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.220 EAL: No free 2048 kB hugepages reported on node 1 00:14:19.220 [2024-07-23 13:56:10.068674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:19.220 [2024-07-23 13:56:10.144803] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:19.220 [2024-07-23 13:56:10.144915] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.220 [2024-07-23 13:56:10.144924] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.220 [2024-07-23 13:56:10.144930] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
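nvmf_tcp_init, traced above, wires the two e810 ports into a loopback-free test bed: the target port is moved into a private network namespace, both sides get addresses on 10.0.0.0/24, port 4420 is opened through the firewall, and reachability is checked in both directions. The same commands, collected from the trace into one sketch (interface names and addresses exactly as logged in this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, host namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> host
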
00:14:19.220 [2024-07-23 13:56:10.144975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.220 [2024-07-23 13:56:10.144978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.786 13:56:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:19.786 13:56:10 -- common/autotest_common.sh@852 -- # return 0 00:14:19.786 13:56:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:19.786 13:56:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:19.786 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.044 13:56:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:20.044 13:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.044 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.044 [2024-07-23 13:56:10.833465] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:20.044 13:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:20.044 13:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.044 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.044 13:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.044 13:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.044 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.044 [2024-07-23 13:56:10.849613] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.044 13:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:20.044 13:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.044 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.044 NULL1 00:14:20.044 13:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:20.044 13:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.044 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.044 Delay0 00:14:20.044 13:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:20.044 13:56:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:20.044 13:56:10 -- common/autotest_common.sh@10 -- # set +x 00:14:20.044 13:56:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@28 -- # perf_pid=3209608 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:20.044 13:56:10 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:20.044 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.044 [2024-07-23 13:56:10.924236] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:21.944 13:56:12 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:21.944 13:56:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:21.944 13:56:12 -- common/autotest_common.sh@10 -- # set +x 00:14:22.203 [several hundred interleaved "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries are elided for readability; only the distinct error events are kept below, in order] 00:14:22.203 [2024-07-23 13:56:13.096480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233b8f0 is same with the state(5) to be set 00:14:22.203 [2024-07-23 13:56:13.096826] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9c0400bf20 is same with the state(5) to be set 00:14:23.138 [2024-07-23 13:56:14.062677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2332910 is same with the state(5) to be set 00:14:23.138 [2024-07-23 13:56:14.098783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9c0400c1d0 is same with the state(5) to be set 00:14:23.138 [2024-07-23 13:56:14.099106] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2343e40 is same with the state(5) to be set 00:14:23.138 [2024-07-23 13:56:14.099272] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233bba0 is same with the state(5) to be set 00:14:23.138 [2024-07-23 13:56:14.099416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233b640 is same with the state(5) to be set 00:14:23.138 [2024-07-23 13:56:14.100016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2332910 (9): Bad file descriptor 00:14:23.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:23.138 13:56:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.138 13:56:14 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:23.138 13:56:14 -- target/delete_subsystem.sh@35 -- # kill -0 3209608 00:14:23.138 13:56:14 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:23.138 Initializing NVMe Controllers 00:14:23.138 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:23.138 Controller IO queue size 128, less than required. 00:14:23.138 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:23.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:23.138 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:23.138 Initialization complete. Launching workers. 
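The aborted completions above are the intended outcome of this test: the namespace sits on a delay bdev (bdev_delay_create with -r/-t/-w/-n of 1000000 us each, as traced earlier), so I/O from spdk_nvme_perf is guaranteed to still be in flight when nvmf_delete_subsystem tears down cnode1, and the perf job exits with errors. The script then polls for the process to disappear; a sketch of that loop, matching the delete_subsystem.sh lines quoted in this log (the iteration bound is 30 in the first pass and 20 in the second):

    # perf_pid was recorded when spdk_nvme_perf was started in the background
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do   # kill -0 probes without signalling
      (( delay++ > 20 )) && exit 1              # bail out after ~10s of 0.5s naps
      sleep 0.5
    done

Once kill -0 reports "No such process", the script uses the NOT wait idiom seen below to assert that the perf job really did exit with a nonzero status.
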
00:14:23.138 ======================================================== 00:14:23.139 Latency(us) 00:14:23.139 Device Information : IOPS MiB/s Average min max 00:14:23.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 192.99 0.09 946036.18 529.09 1012748.85 00:14:23.139 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.78 0.08 875859.85 224.94 1013200.25 00:14:23.139 ======================================================== 00:14:23.139 Total : 348.77 0.17 914691.42 224.94 1013200.25 00:14:23.139 00:14:23.705 13:56:14 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:23.705 13:56:14 -- target/delete_subsystem.sh@35 -- # kill -0 3209608 00:14:23.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3209608) - No such process 00:14:23.705 13:56:14 -- target/delete_subsystem.sh@45 -- # NOT wait 3209608 00:14:23.705 13:56:14 -- common/autotest_common.sh@640 -- # local es=0 00:14:23.705 13:56:14 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 3209608 00:14:23.705 13:56:14 -- common/autotest_common.sh@628 -- # local arg=wait 00:14:23.705 13:56:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:23.705 13:56:14 -- common/autotest_common.sh@632 -- # type -t wait 00:14:23.705 13:56:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:23.705 13:56:14 -- common/autotest_common.sh@643 -- # wait 3209608 00:14:23.705 13:56:14 -- common/autotest_common.sh@643 -- # es=1 00:14:23.705 13:56:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:23.705 13:56:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:23.706 13:56:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:23.706 13:56:14 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:23.706 13:56:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.706 13:56:14 -- common/autotest_common.sh@10 -- # set +x 00:14:23.706 13:56:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.706 13:56:14 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.706 13:56:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.706 13:56:14 -- common/autotest_common.sh@10 -- # set +x 00:14:23.706 [2024-07-23 13:56:14.631805] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.706 13:56:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.706 13:56:14 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.706 13:56:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:23.706 13:56:14 -- common/autotest_common.sh@10 -- # set +x 00:14:23.706 13:56:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:23.706 13:56:14 -- target/delete_subsystem.sh@54 -- # perf_pid=3210301 00:14:23.706 13:56:14 -- target/delete_subsystem.sh@56 -- # delay=0 00:14:23.706 13:56:14 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:23.706 13:56:14 -- target/delete_subsystem.sh@57 -- # kill -0 3210301 00:14:23.706 13:56:14 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:23.706 EAL: No free 2048 kB hugepages 
reported on node 1 00:14:23.706 [2024-07-23 13:56:14.686431] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:24.272 13:56:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:24.272 13:56:15 -- target/delete_subsystem.sh@57 -- # kill -0 3210301 00:14:24.272 13:56:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:24.838 13:56:15 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:24.838 13:56:15 -- target/delete_subsystem.sh@57 -- # kill -0 3210301 00:14:24.838 13:56:15 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:25.404 13:56:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:25.404 13:56:16 -- target/delete_subsystem.sh@57 -- # kill -0 3210301 00:14:25.404 13:56:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:25.662 13:56:16 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:25.662 13:56:16 -- target/delete_subsystem.sh@57 -- # kill -0 3210301 00:14:25.662 13:56:16 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:26.228 13:56:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:26.228 13:56:17 -- target/delete_subsystem.sh@57 -- # kill -0 3210301 00:14:26.228 13:56:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:26.795 13:56:17 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:26.795 13:56:17 -- target/delete_subsystem.sh@57 -- # kill -0 3210301 00:14:26.795 13:56:17 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:27.053 Initializing NVMe Controllers 00:14:27.053 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:27.053 Controller IO queue size 128, less than required. 00:14:27.053 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:27.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:27.053 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:27.053 Initialization complete. Launching workers. 
00:14:27.053 ======================================================== 00:14:27.053 Latency(us) 00:14:27.053 Device Information : IOPS MiB/s Average min max 00:14:27.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003494.01 1000356.77 1041508.10 00:14:27.053 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005285.85 1000429.91 1012227.27 00:14:27.053 ======================================================== 00:14:27.053 Total : 256.00 0.12 1004389.93 1000356.77 1041508.10 00:14:27.053 00:14:27.312 13:56:18 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:27.312 13:56:18 -- target/delete_subsystem.sh@57 -- # kill -0 3210301 00:14:27.312 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3210301) - No such process 00:14:27.312 13:56:18 -- target/delete_subsystem.sh@67 -- # wait 3210301 00:14:27.312 13:56:18 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:27.312 13:56:18 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:27.312 13:56:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:27.312 13:56:18 -- nvmf/common.sh@116 -- # sync 00:14:27.312 13:56:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:27.312 13:56:18 -- nvmf/common.sh@119 -- # set +e 00:14:27.312 13:56:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:27.312 13:56:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:27.312 rmmod nvme_tcp 00:14:27.312 rmmod nvme_fabrics 00:14:27.312 rmmod nvme_keyring 00:14:27.312 13:56:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:27.312 13:56:18 -- nvmf/common.sh@123 -- # set -e 00:14:27.312 13:56:18 -- nvmf/common.sh@124 -- # return 0 00:14:27.312 13:56:18 -- nvmf/common.sh@477 -- # '[' -n 3209575 ']' 00:14:27.312 13:56:18 -- nvmf/common.sh@478 -- # killprocess 3209575 00:14:27.312 13:56:18 -- common/autotest_common.sh@926 -- # '[' -z 3209575 ']' 00:14:27.312 13:56:18 -- common/autotest_common.sh@930 -- # kill -0 3209575 00:14:27.312 13:56:18 -- common/autotest_common.sh@931 -- # uname 00:14:27.312 13:56:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:27.312 13:56:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3209575 00:14:27.312 13:56:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:27.312 13:56:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:27.312 13:56:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3209575' 00:14:27.312 killing process with pid 3209575 00:14:27.312 13:56:18 -- common/autotest_common.sh@945 -- # kill 3209575 00:14:27.312 13:56:18 -- common/autotest_common.sh@950 -- # wait 3209575 00:14:27.570 13:56:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:27.570 13:56:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:27.570 13:56:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:27.570 13:56:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:27.570 13:56:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:27.570 13:56:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.570 13:56:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.570 13:56:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.104 13:56:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:30.104 00:14:30.104 real 0m15.931s 00:14:30.104 user 0m30.293s 00:14:30.104 sys 0m4.674s 
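nvmftestfini, traced above, unwinds the whole fixture: the nvme-tcp module stack is unloaded inside a retry loop (references can linger briefly), the nvmf_tgt reactor is killed by pid, the namespace state is removed, and the initiator address is flushed. A condensed sketch of the visible steps; the loop's back-off and the namespace removal details are assumptions, since the log shows only the loop header and the _remove_spdk_ns call:

    set +e                               # rmmod can fail while references drain
    for i in {1..20}; do
      modprobe -v -r nvme-tcp && break   # also unloads nvme_fabrics/nvme_keyring deps
      sleep 1                            # assumed back-off; not visible in the log
    done
    modprobe -v -r nvme-fabrics
    set -e
    killprocess "$nvmfpid"               # kill + wait, as traced at nvmf/common.sh@478
    _remove_spdk_ns                      # assumed to delete cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_1
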
00:14:30.104 13:56:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:30.104 13:56:20 -- common/autotest_common.sh@10 -- # set +x 00:14:30.104 ************************************ 00:14:30.104 END TEST nvmf_delete_subsystem 00:14:30.104 ************************************ 00:14:30.104 13:56:20 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:14:30.104 13:56:20 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.104 13:56:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:30.104 13:56:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:30.104 13:56:20 -- common/autotest_common.sh@10 -- # set +x 00:14:30.104 ************************************ 00:14:30.104 START TEST nvmf_nvme_cli 00:14:30.104 ************************************ 00:14:30.104 13:56:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.105 * Looking for test storage... 00:14:30.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:30.105 13:56:20 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.105 13:56:20 -- nvmf/common.sh@7 -- # uname -s 00:14:30.105 13:56:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.105 13:56:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.105 13:56:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.105 13:56:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.105 13:56:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.105 13:56:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.105 13:56:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.105 13:56:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.105 13:56:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.105 13:56:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.105 13:56:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.105 13:56:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:30.105 13:56:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.105 13:56:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.105 13:56:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:30.105 13:56:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:30.105 13:56:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.105 13:56:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.105 13:56:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.105 13:56:20 -- paths/export.sh@2 -- # [paths/export.sh@2–@6: PATH is repeatedly prefixed with /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin, then exported and echoed; the near-identical, multi-hundred-character PATH values are elided for readability] 00:14:30.105 13:56:20 -- nvmf/common.sh@46 -- # : 0 00:14:30.105 13:56:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:30.105 13:56:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:30.105 13:56:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:30.105 13:56:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.105 13:56:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.105 13:56:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:30.105 13:56:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:30.105 13:56:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:30.105 13:56:20 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:30.105 13:56:20 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:30.105 13:56:20 -- target/nvme_cli.sh@14 -- # devs=() 00:14:30.105 13:56:20 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:30.105 13:56:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:30.105 13:56:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.105 13:56:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:30.105 13:56:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:30.105 13:56:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:30.105 13:56:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.105 13:56:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.105 13:56:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.105 13:56:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:30.105 13:56:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:30.105 13:56:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:30.105 13:56:20 -- common/autotest_common.sh@10 -- # set +x 00:14:35.416 13:56:26 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:35.416 13:56:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:35.416 13:56:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:35.416 13:56:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:35.416 13:56:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:35.416 13:56:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:35.416 13:56:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:35.416 13:56:26 -- nvmf/common.sh@294 -- # net_devs=() 00:14:35.416 13:56:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:35.416 13:56:26 -- nvmf/common.sh@295 -- # e810=() 00:14:35.416 13:56:26 -- nvmf/common.sh@295 -- # local -ga e810 00:14:35.416 13:56:26 -- nvmf/common.sh@296 -- # x722=() 00:14:35.416 13:56:26 -- nvmf/common.sh@296 -- # local -ga x722 00:14:35.416 13:56:26 -- nvmf/common.sh@297 -- # mlx=() 00:14:35.416 13:56:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:35.416 13:56:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.416 13:56:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:35.416 13:56:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:35.416 13:56:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:35.416 13:56:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:35.416 13:56:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:35.416 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:35.416 13:56:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:35.416 13:56:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:35.416 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:35.416 13:56:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
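The trace above walks the pci_bus_cache and classifies both 0000:86:00.0 and 0000:86:00.1 as E810 parts (Intel device ID 0x159b, driver ice). A quick out-of-band check for the same parts, assuming a host with pciutils installed; this is an illustrative aside, not a command the test scripts run:

# List PCI functions with vendor 0x8086 (Intel) and device 0x159b (E810),
# printing numeric IDs next to the textual description.
lspci -nn -d 8086:159b
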
00:14:35.416 13:56:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:35.416 13:56:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:35.416 13:56:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:35.416 13:56:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.416 13:56:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:35.416 13:56:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.416 13:56:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:35.416 Found net devices under 0000:86:00.0: cvl_0_0 00:14:35.417 13:56:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.417 13:56:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:35.417 13:56:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.417 13:56:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:35.417 13:56:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.417 13:56:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:35.417 Found net devices under 0000:86:00.1: cvl_0_1 00:14:35.417 13:56:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.417 13:56:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:35.417 13:56:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:35.417 13:56:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:35.417 13:56:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:35.417 13:56:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:35.417 13:56:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.417 13:56:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.417 13:56:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.417 13:56:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:35.417 13:56:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.417 13:56:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.417 13:56:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:35.417 13:56:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.417 13:56:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.417 13:56:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:35.417 13:56:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:35.417 13:56:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.417 13:56:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.417 13:56:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.417 13:56:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.417 13:56:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:35.417 13:56:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.417 13:56:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.417 13:56:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.417 13:56:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:35.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:35.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:14:35.417 00:14:35.417 --- 10.0.0.2 ping statistics --- 00:14:35.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.417 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:35.417 13:56:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.298 ms 00:14:35.417 00:14:35.417 --- 10.0.0.1 ping statistics --- 00:14:35.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.417 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:14:35.417 13:56:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.417 13:56:26 -- nvmf/common.sh@410 -- # return 0 00:14:35.417 13:56:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:35.417 13:56:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.417 13:56:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:35.417 13:56:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:35.417 13:56:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.417 13:56:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:35.417 13:56:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:35.417 13:56:26 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:35.417 13:56:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:35.417 13:56:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:35.417 13:56:26 -- common/autotest_common.sh@10 -- # set +x 00:14:35.417 13:56:26 -- nvmf/common.sh@469 -- # nvmfpid=3214474 00:14:35.417 13:56:26 -- nvmf/common.sh@470 -- # waitforlisten 3214474 00:14:35.417 13:56:26 -- common/autotest_common.sh@819 -- # '[' -z 3214474 ']' 00:14:35.417 13:56:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.417 13:56:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:35.417 13:56:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.417 13:56:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:35.417 13:56:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.417 13:56:26 -- common/autotest_common.sh@10 -- # set +x 00:14:35.676 [2024-07-23 13:56:26.468002] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:35.676 [2024-07-23 13:56:26.468060] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.676 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.676 [2024-07-23 13:56:26.527516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.676 [2024-07-23 13:56:26.605685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.676 [2024-07-23 13:56:26.605797] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.676 [2024-07-23 13:56:26.605806] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:35.676 [2024-07-23 13:56:26.605812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.676 [2024-07-23 13:56:26.605860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.676 [2024-07-23 13:56:26.605960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.676 [2024-07-23 13:56:26.606022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.676 [2024-07-23 13:56:26.606023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.613 13:56:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:36.613 13:56:27 -- common/autotest_common.sh@852 -- # return 0 00:14:36.613 13:56:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:36.613 13:56:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 13:56:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.613 13:56:27 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.613 13:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 [2024-07-23 13:56:27.303313] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.613 13:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.613 13:56:27 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:36.613 13:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 Malloc0 00:14:36.613 13:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.613 13:56:27 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:36.613 13:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 Malloc1 00:14:36.613 13:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.613 13:56:27 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:36.613 13:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 13:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.613 13:56:27 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:36.613 13:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 13:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.613 13:56:27 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:36.613 13:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 13:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.613 13:56:27 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.613 13:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 [2024-07-23 13:56:27.385185] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:14:36.613 13:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.613 13:56:27 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:36.613 13:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.613 13:56:27 -- common/autotest_common.sh@10 -- # set +x 00:14:36.613 13:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.613 13:56:27 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:36.613 00:14:36.613 Discovery Log Number of Records 2, Generation counter 2 00:14:36.613 =====Discovery Log Entry 0====== 00:14:36.613 trtype: tcp 00:14:36.613 adrfam: ipv4 00:14:36.613 subtype: current discovery subsystem 00:14:36.613 treq: not required 00:14:36.613 portid: 0 00:14:36.613 trsvcid: 4420 00:14:36.613 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:36.613 traddr: 10.0.0.2 00:14:36.613 eflags: explicit discovery connections, duplicate discovery information 00:14:36.613 sectype: none 00:14:36.613 =====Discovery Log Entry 1====== 00:14:36.613 trtype: tcp 00:14:36.613 adrfam: ipv4 00:14:36.613 subtype: nvme subsystem 00:14:36.613 treq: not required 00:14:36.613 portid: 0 00:14:36.613 trsvcid: 4420 00:14:36.613 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:36.613 traddr: 10.0.0.2 00:14:36.613 eflags: none 00:14:36.613 sectype: none 00:14:36.613 13:56:27 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:36.613 13:56:27 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:36.613 13:56:27 -- nvmf/common.sh@510 -- # local dev _ 00:14:36.613 13:56:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:36.613 13:56:27 -- nvmf/common.sh@509 -- # nvme list 00:14:36.613 13:56:27 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:36.613 13:56:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:36.613 13:56:27 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:36.613 13:56:27 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:36.613 13:56:27 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:36.613 13:56:27 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:37.990 13:56:28 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:37.990 13:56:28 -- common/autotest_common.sh@1177 -- # local i=0 00:14:37.990 13:56:28 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.990 13:56:28 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:14:37.990 13:56:28 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:14:37.990 13:56:28 -- common/autotest_common.sh@1184 -- # sleep 2 00:14:39.893 13:56:30 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:14:39.893 13:56:30 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:14:39.893 13:56:30 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.893 13:56:30 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:14:39.893 13:56:30 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.893 13:56:30 -- common/autotest_common.sh@1187 -- # return 0 00:14:39.893 13:56:30 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:39.893 13:56:30 -- 
nvmf/common.sh@510 -- # local dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@509 -- # nvme list 00:14:39.893 13:56:30 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:39.893 13:56:30 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:39.893 13:56:30 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:39.893 /dev/nvme0n1 ]] 00:14:39.893 13:56:30 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:39.893 13:56:30 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:39.893 13:56:30 -- nvmf/common.sh@510 -- # local dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@509 -- # nvme list 00:14:39.893 13:56:30 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:39.893 13:56:30 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:39.893 13:56:30 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:14:39.893 13:56:30 -- nvmf/common.sh@512 -- # read -r dev _ 00:14:39.893 13:56:30 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:39.893 13:56:30 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.893 13:56:30 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.893 13:56:30 -- common/autotest_common.sh@1198 -- # local i=0 00:14:39.893 13:56:30 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:39.893 13:56:30 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.893 13:56:30 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:39.894 13:56:30 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.894 13:56:30 -- common/autotest_common.sh@1210 -- # return 0 00:14:39.894 13:56:30 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:39.894 13:56:30 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:39.894 13:56:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.894 13:56:30 -- common/autotest_common.sh@10 -- # set +x 00:14:39.894 13:56:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.894 13:56:30 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:39.894 13:56:30 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:39.894 13:56:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:39.894 13:56:30 -- nvmf/common.sh@116 -- # sync 00:14:39.894 13:56:30 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:39.894 13:56:30 -- nvmf/common.sh@119 -- # set +e 00:14:39.894 13:56:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:39.894 13:56:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:39.894 rmmod nvme_tcp 00:14:39.894 rmmod nvme_fabrics 00:14:39.894 rmmod nvme_keyring 00:14:39.894 13:56:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:39.894 13:56:30 -- nvmf/common.sh@123 -- # set -e 00:14:39.894 13:56:30 -- nvmf/common.sh@124 -- # return 0 00:14:39.894 13:56:30 -- nvmf/common.sh@477 -- # '[' -n 3214474 ']' 00:14:39.894 13:56:30 -- nvmf/common.sh@478 -- # killprocess 3214474 00:14:39.894 13:56:30 -- common/autotest_common.sh@926 -- # '[' -z 3214474 ']' 00:14:39.894 13:56:30 -- common/autotest_common.sh@930 -- # kill -0 3214474 00:14:39.894 13:56:30 -- common/autotest_common.sh@931 -- # uname 00:14:39.894 13:56:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:39.894 13:56:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3214474 00:14:40.153 13:56:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:40.153 13:56:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:40.153 13:56:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3214474' 00:14:40.153 killing process with pid 3214474 00:14:40.153 13:56:30 -- common/autotest_common.sh@945 -- # kill 3214474 00:14:40.153 13:56:30 -- common/autotest_common.sh@950 -- # wait 3214474 00:14:40.413 13:56:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:40.413 13:56:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:40.413 13:56:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:40.413 13:56:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:40.413 13:56:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:40.413 13:56:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.413 13:56:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:40.413 13:56:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.347 13:56:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:42.347 00:14:42.347 real 0m12.651s 00:14:42.347 user 0m19.711s 00:14:42.347 sys 0m4.835s 00:14:42.347 13:56:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:42.347 13:56:33 -- common/autotest_common.sh@10 -- # set +x 00:14:42.347 ************************************ 00:14:42.347 END TEST nvmf_nvme_cli 00:14:42.347 ************************************ 00:14:42.347 13:56:33 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:14:42.347 13:56:33 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:42.347 13:56:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:42.347 13:56:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:42.347 13:56:33 -- common/autotest_common.sh@10 -- # set +x 00:14:42.347 ************************************ 00:14:42.347 START TEST nvmf_host_management 00:14:42.347 ************************************ 00:14:42.347 13:56:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:42.347 * Looking for test storage... 
00:14:42.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.605 13:56:33 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.605 13:56:33 -- nvmf/common.sh@7 -- # uname -s 00:14:42.605 13:56:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.605 13:56:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.605 13:56:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.605 13:56:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.605 13:56:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.605 13:56:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.605 13:56:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.605 13:56:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.606 13:56:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.606 13:56:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.606 13:56:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:42.606 13:56:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:42.606 13:56:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.606 13:56:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.606 13:56:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.606 13:56:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.606 13:56:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.606 13:56:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.606 13:56:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.606 13:56:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.606 13:56:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.606 13:56:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.606 13:56:33 -- paths/export.sh@5 -- # export PATH 00:14:42.606 13:56:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.606 13:56:33 -- nvmf/common.sh@46 -- # : 0 00:14:42.606 13:56:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:42.606 13:56:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:42.606 13:56:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:42.606 13:56:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.606 13:56:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.606 13:56:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:42.606 13:56:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:42.606 13:56:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:42.606 13:56:33 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.606 13:56:33 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.606 13:56:33 -- target/host_management.sh@104 -- # nvmftestinit 00:14:42.606 13:56:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:42.606 13:56:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.606 13:56:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:42.606 13:56:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:42.606 13:56:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:42.606 13:56:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.606 13:56:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:42.606 13:56:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.606 13:56:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:42.606 13:56:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:42.606 13:56:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:42.606 13:56:33 -- common/autotest_common.sh@10 -- # set +x 00:14:47.878 13:56:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:47.878 13:56:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:47.878 13:56:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:47.878 13:56:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:47.879 13:56:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:47.879 13:56:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:47.879 13:56:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:47.879 13:56:38 -- nvmf/common.sh@294 -- # net_devs=() 00:14:47.879 13:56:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:47.879 
13:56:38 -- nvmf/common.sh@295 -- # e810=() 00:14:47.879 13:56:38 -- nvmf/common.sh@295 -- # local -ga e810 00:14:47.879 13:56:38 -- nvmf/common.sh@296 -- # x722=() 00:14:47.879 13:56:38 -- nvmf/common.sh@296 -- # local -ga x722 00:14:47.879 13:56:38 -- nvmf/common.sh@297 -- # mlx=() 00:14:47.879 13:56:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:47.879 13:56:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.879 13:56:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:47.879 13:56:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:47.879 13:56:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:47.879 13:56:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:47.879 13:56:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:47.879 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:47.879 13:56:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:47.879 13:56:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:47.879 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:47.879 13:56:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:47.879 13:56:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:47.879 13:56:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.879 13:56:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:47.879 13:56:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.879 13:56:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:86:00.0: cvl_0_0' 00:14:47.879 Found net devices under 0000:86:00.0: cvl_0_0 00:14:47.879 13:56:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.879 13:56:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:47.879 13:56:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.879 13:56:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:47.879 13:56:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.879 13:56:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:47.879 Found net devices under 0000:86:00.1: cvl_0_1 00:14:47.879 13:56:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.879 13:56:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:47.879 13:56:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:47.879 13:56:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:47.879 13:56:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:47.879 13:56:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.879 13:56:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.879 13:56:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.879 13:56:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:47.879 13:56:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.879 13:56:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.879 13:56:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:47.879 13:56:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.879 13:56:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.879 13:56:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:47.879 13:56:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:47.879 13:56:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.879 13:56:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.879 13:56:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.879 13:56:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.879 13:56:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:47.879 13:56:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.879 13:56:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.879 13:56:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.879 13:56:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:47.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:14:47.879 00:14:47.879 --- 10.0.0.2 ping statistics --- 00:14:47.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.879 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:14:47.879 13:56:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:14:47.879 00:14:47.879 --- 10.0.0.1 ping statistics --- 00:14:47.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.879 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:14:47.879 13:56:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.879 13:56:38 -- nvmf/common.sh@410 -- # return 0 00:14:47.879 13:56:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:47.879 13:56:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.879 13:56:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:47.880 13:56:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:47.880 13:56:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.880 13:56:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:47.880 13:56:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:48.138 13:56:38 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:48.138 13:56:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:48.138 13:56:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:48.138 13:56:38 -- common/autotest_common.sh@10 -- # set +x 00:14:48.138 ************************************ 00:14:48.138 START TEST nvmf_host_management 00:14:48.138 ************************************ 00:14:48.138 13:56:38 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:14:48.138 13:56:38 -- target/host_management.sh@69 -- # starttarget 00:14:48.138 13:56:38 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:48.138 13:56:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:48.138 13:56:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:48.138 13:56:38 -- common/autotest_common.sh@10 -- # set +x 00:14:48.138 13:56:38 -- nvmf/common.sh@469 -- # nvmfpid=3218689 00:14:48.138 13:56:38 -- nvmf/common.sh@470 -- # waitforlisten 3218689 00:14:48.138 13:56:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:48.138 13:56:38 -- common/autotest_common.sh@819 -- # '[' -z 3218689 ']' 00:14:48.138 13:56:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.138 13:56:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:48.138 13:56:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.138 13:56:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:48.138 13:56:38 -- common/autotest_common.sh@10 -- # set +x 00:14:48.138 [2024-07-23 13:56:38.964031] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:14:48.139 [2024-07-23 13:56:38.964089] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.139 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.139 [2024-07-23 13:56:39.020325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.139 [2024-07-23 13:56:39.098503] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:48.139 [2024-07-23 13:56:39.098610] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.139 [2024-07-23 13:56:39.098617] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:48.139 [2024-07-23 13:56:39.098624] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.139 [2024-07-23 13:56:39.098719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.139 [2024-07-23 13:56:39.098802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.139 [2024-07-23 13:56:39.098910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.139 [2024-07-23 13:56:39.098911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:49.073 13:56:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:49.073 13:56:39 -- common/autotest_common.sh@852 -- # return 0 00:14:49.073 13:56:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:49.073 13:56:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:49.073 13:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:49.073 13:56:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:49.073 13:56:39 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:49.073 13:56:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.073 13:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:49.073 [2024-07-23 13:56:39.808383] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:49.073 13:56:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.073 13:56:39 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:49.073 13:56:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:49.073 13:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:49.073 13:56:39 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:49.073 13:56:39 -- target/host_management.sh@23 -- # cat 00:14:49.073 13:56:39 -- target/host_management.sh@30 -- # rpc_cmd 00:14:49.073 13:56:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.073 13:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:49.073 Malloc0 00:14:49.073 [2024-07-23 13:56:39.867873] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.073 13:56:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.073 13:56:39 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:49.073 13:56:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:49.073 13:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:49.073 13:56:39 -- target/host_management.sh@73 -- # perfpid=3218887 00:14:49.073 13:56:39 -- target/host_management.sh@74 -- # 
waitforlisten 3218887 /var/tmp/bdevperf.sock 00:14:49.073 13:56:39 -- common/autotest_common.sh@819 -- # '[' -z 3218887 ']' 00:14:49.073 13:56:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:49.073 13:56:39 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:49.073 13:56:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:49.073 13:56:39 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:49.073 13:56:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:49.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:49.073 13:56:39 -- nvmf/common.sh@520 -- # config=() 00:14:49.073 13:56:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:49.073 13:56:39 -- nvmf/common.sh@520 -- # local subsystem config 00:14:49.073 13:56:39 -- common/autotest_common.sh@10 -- # set +x 00:14:49.073 13:56:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:49.073 13:56:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:49.073 { 00:14:49.073 "params": { 00:14:49.073 "name": "Nvme$subsystem", 00:14:49.073 "trtype": "$TEST_TRANSPORT", 00:14:49.073 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:49.073 "adrfam": "ipv4", 00:14:49.073 "trsvcid": "$NVMF_PORT", 00:14:49.073 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:49.073 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:49.073 "hdgst": ${hdgst:-false}, 00:14:49.073 "ddgst": ${ddgst:-false} 00:14:49.073 }, 00:14:49.073 "method": "bdev_nvme_attach_controller" 00:14:49.073 } 00:14:49.073 EOF 00:14:49.073 )") 00:14:49.073 13:56:39 -- nvmf/common.sh@542 -- # cat 00:14:49.073 13:56:39 -- nvmf/common.sh@544 -- # jq . 00:14:49.073 13:56:39 -- nvmf/common.sh@545 -- # IFS=, 00:14:49.073 13:56:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:49.073 "params": { 00:14:49.073 "name": "Nvme0", 00:14:49.073 "trtype": "tcp", 00:14:49.073 "traddr": "10.0.0.2", 00:14:49.073 "adrfam": "ipv4", 00:14:49.073 "trsvcid": "4420", 00:14:49.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:49.074 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:49.074 "hdgst": false, 00:14:49.074 "ddgst": false 00:14:49.074 }, 00:14:49.074 "method": "bdev_nvme_attach_controller" 00:14:49.074 }' 00:14:49.074 [2024-07-23 13:56:39.955637] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:49.074 [2024-07-23 13:56:39.955682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3218887 ] 00:14:49.074 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.074 [2024-07-23 13:56:40.011147] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:49.332 [2024-07-23 13:56:40.095320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.332 Running I/O for 10 seconds... 
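bdevperf is launched above with -q 64 (queue depth), -o 65536 (64 KiB I/Os), -w verify and -t 10 (seconds), reading its bdev configuration from --json /dev/fd/63. Only the single controller-attach entry that gen_nvmf_target_json splices into that document is echoed in the trace; the enclosing "subsystems" wrapper the helper presumably adds is not visible here. Pretty-printing that entry for readability:

jq . <<'JSON'
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
JSON
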
00:14:49.901 13:56:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:49.901 13:56:40 -- common/autotest_common.sh@852 -- # return 0 00:14:49.901 13:56:40 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:49.901 13:56:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.901 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:14:49.901 13:56:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.901 13:56:40 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:49.901 13:56:40 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:49.901 13:56:40 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:49.901 13:56:40 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:49.901 13:56:40 -- target/host_management.sh@52 -- # local ret=1 00:14:49.901 13:56:40 -- target/host_management.sh@53 -- # local i 00:14:49.901 13:56:40 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:49.901 13:56:40 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:49.901 13:56:40 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:49.901 13:56:40 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:49.901 13:56:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.901 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:14:49.901 13:56:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.901 13:56:40 -- target/host_management.sh@55 -- # read_io_count=1282 00:14:49.901 13:56:40 -- target/host_management.sh@58 -- # '[' 1282 -ge 100 ']' 00:14:49.901 13:56:40 -- target/host_management.sh@59 -- # ret=0 00:14:49.901 13:56:40 -- target/host_management.sh@60 -- # break 00:14:49.901 13:56:40 -- target/host_management.sh@64 -- # return 0 00:14:49.901 13:56:40 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:49.901 13:56:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.901 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:14:49.901 [2024-07-23 13:56:40.835521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the 
state(5) to be set 00:14:49.901
[2024-07-23 13:56:40.835609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901
[2024-07-23 13:56:40.835740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835746] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.835776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19daa40 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.836470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.901 [2024-07-23 13:56:40.836503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.836513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.901 [2024-07-23 13:56:40.836520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.836528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.901 [2024-07-23 13:56:40.836535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.836542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.901 [2024-07-23 13:56:40.836549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.836556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec6900 is same with the state(5) to be set 00:14:49.901 [2024-07-23 13:56:40.836980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:53120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.901 [2024-07-23 13:56:40.836999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.837012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:53248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.901 [2024-07-23 13:56:40.837019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.837028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:53376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.901 [2024-07-23 13:56:40.837035] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.837054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:53504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.901 [2024-07-23 13:56:40.837061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.837069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:53632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.901 [2024-07-23 13:56:40.837076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.837084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:53760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.901 [2024-07-23 13:56:40.837091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.901 [2024-07-23 13:56:40.837099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:53888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.901 [2024-07-23 13:56:40.837106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:47616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:54400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:48128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:48256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:54528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:54656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:54784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:48512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:48768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:54912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:48896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:55424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837496] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:55936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:56064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:56192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:56320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:56448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:56576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:56704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.902 [2024-07-23 13:56:40.837644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.902 [2024-07-23 13:56:40.837652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:50944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:56832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:51072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:56960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:57088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:51584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:52096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:52352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:52480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837943] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.837952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:49.903 [2024-07-23 13:56:40.837958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.903 [2024-07-23 13:56:40.838035] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ec4170 was disconnected and freed. reset controller. 00:14:49.903 [2024-07-23 13:56:40.838942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:49.903 13:56:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.903 task offset: 53120 on job bdev=Nvme0n1 fails 00:14:49.903 00:14:49.903 Latency(us) 00:14:49.903 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.903 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:49.903 Job: Nvme0n1 ended in about 0.58 seconds with error 00:14:49.903 Verification LBA range: start 0x0 length 0x400 00:14:49.903 Nvme0n1 : 0.58 2439.44 152.46 110.02 0.00 24823.73 1339.21 51061.09 00:14:49.903 =================================================================================================================== 00:14:49.903 Total : 2439.44 152.46 110.02 0.00 24823.73 1339.21 51061.09 00:14:49.903 [2024-07-23 13:56:40.840512] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:49.903 [2024-07-23 13:56:40.840527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ec6900 (9): Bad file descriptor 00:14:49.903 13:56:40 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:49.903 13:56:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:49.903 13:56:40 -- common/autotest_common.sh@10 -- # set +x 00:14:49.903 13:56:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:49.903 13:56:40 -- target/host_management.sh@87 -- # sleep 1 00:14:49.903 [2024-07-23 13:56:40.895950] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
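The wall of aborts above is the point of the host_management case: bdevperf was driving verify I/O at queue depth 64 when rpc_cmd nvmf_subsystem_remove_host revoked host0's access to cnode0, so the target deleted the submission queue and every outstanding command completed as ABORTED - SQ DELETION; the initiator dropped the connection and began a controller reset, which can only succeed once nvmf_subsystem_add_host restores access. A minimal sketch of the same probe driven directly with rpc.py (socket path, bdev name and NQNs taken from the log; the polling loop paraphrases the script's waitforio helper):

    # wait until bdevperf has completed at least 100 reads on Nvme0n1
    while [ "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
               jq -r '.bdevs[0].num_read_ops')" -lt 100 ]; do
        sleep 0.25
    done
    # revoke the host mid-I/O: outstanding commands abort, the initiator starts a reset
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # restore access and give the reset a second to complete
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1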
00:14:51.279 13:56:41 -- target/host_management.sh@91 -- # kill -9 3218887 00:14:51.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3218887) - No such process 00:14:51.279 13:56:41 -- target/host_management.sh@91 -- # true 00:14:51.279 13:56:41 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:51.279 13:56:41 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:51.279 13:56:41 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:51.279 13:56:41 -- nvmf/common.sh@520 -- # config=() 00:14:51.279 13:56:41 -- nvmf/common.sh@520 -- # local subsystem config 00:14:51.279 13:56:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:51.279 13:56:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:51.279 { 00:14:51.279 "params": { 00:14:51.279 "name": "Nvme$subsystem", 00:14:51.279 "trtype": "$TEST_TRANSPORT", 00:14:51.279 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:51.279 "adrfam": "ipv4", 00:14:51.279 "trsvcid": "$NVMF_PORT", 00:14:51.279 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:51.279 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:51.279 "hdgst": ${hdgst:-false}, 00:14:51.279 "ddgst": ${ddgst:-false} 00:14:51.279 }, 00:14:51.279 "method": "bdev_nvme_attach_controller" 00:14:51.279 } 00:14:51.279 EOF 00:14:51.279 )") 00:14:51.279 13:56:41 -- nvmf/common.sh@542 -- # cat 00:14:51.279 13:56:41 -- nvmf/common.sh@544 -- # jq . 00:14:51.279 13:56:41 -- nvmf/common.sh@545 -- # IFS=, 00:14:51.279 13:56:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:51.279 "params": { 00:14:51.279 "name": "Nvme0", 00:14:51.279 "trtype": "tcp", 00:14:51.279 "traddr": "10.0.0.2", 00:14:51.279 "adrfam": "ipv4", 00:14:51.279 "trsvcid": "4420", 00:14:51.279 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:51.279 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:51.279 "hdgst": false, 00:14:51.279 "ddgst": false 00:14:51.279 }, 00:14:51.279 "method": "bdev_nvme_attach_controller" 00:14:51.279 }' 00:14:51.279 [2024-07-23 13:56:41.901678] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:51.279 [2024-07-23 13:56:41.901730] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3219299 ] 00:14:51.279 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.279 [2024-07-23 13:56:41.954999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.279 [2024-07-23 13:56:42.026124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.537 Running I/O for 1 seconds... 
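Before the results of this second run print, note how the restarted bdevperf was configured: there is no config file on disk; gen_nvmf_target_json prints the bdev_nvme_attach_controller fragment shown above, and the script feeds it to bdevperf as --json /dev/fd/62. A standalone sketch of the same pattern using process substitution; the outer "subsystems"/"config" wrapper is an assumption about what nvmf/common.sh assembles, since only the inner fragment appears in this log:

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -q 64 -o 65536 -w verify -t 1 \
        --json <(cat <<'EOF'
    {
      "subsystems": [{
        "subsystem": "bdev",
        "config": [{
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }]
      }]
    }
    EOF
    )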
00:14:52.474
00:14:52.474 Latency(us)
00:14:52.474 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:52.474 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:52.474 Verification LBA range: start 0x0 length 0x400
00:14:52.474 Nvme0n1 : 1.01 2439.97 152.50 0.00 0.00 25927.07 3376.53 41715.09
00:14:52.474 ===================================================================================================================
00:14:52.474 Total : 2439.97 152.50 0.00 0.00 25927.07 3376.53 41715.09
00:14:52.732 13:56:43 -- target/host_management.sh@101 -- # stoptarget 00:14:52.732 13:56:43 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:52.732 13:56:43 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:52.732 13:56:43 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:52.732 13:56:43 -- target/host_management.sh@40 -- # nvmftestfini 00:14:52.732 13:56:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:52.732 13:56:43 -- nvmf/common.sh@116 -- # sync 00:14:52.732 13:56:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:52.732 13:56:43 -- nvmf/common.sh@119 -- # set +e 00:14:52.732 13:56:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.732 13:56:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:52.732 rmmod nvme_tcp 00:14:52.732 rmmod nvme_fabrics 00:14:52.732 rmmod nvme_keyring 00:14:52.732 13:56:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.732 13:56:43 -- nvmf/common.sh@123 -- # set -e 00:14:52.732 13:56:43 -- nvmf/common.sh@124 -- # return 0 00:14:52.732 13:56:43 -- nvmf/common.sh@477 -- # '[' -n 3218689 ']' 00:14:52.732 13:56:43 -- nvmf/common.sh@478 -- # killprocess 3218689 00:14:52.732 13:56:43 -- common/autotest_common.sh@926 -- # '[' -z 3218689 ']' 00:14:52.732 13:56:43 -- common/autotest_common.sh@930 -- # kill -0 3218689 00:14:52.732 13:56:43 -- common/autotest_common.sh@931 -- # uname 00:14:52.732 13:56:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.732 13:56:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3218689 00:14:52.732 13:56:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:52.732 13:56:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:52.732 13:56:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3218689' killing process with pid 3218689 00:14:52.732 13:56:43 -- common/autotest_common.sh@945 -- # kill 3218689 00:14:52.732 13:56:43 -- common/autotest_common.sh@950 -- # wait 3218689 00:14:52.991 [2024-07-23 13:56:43.882002] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:52.991 13:56:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.991 13:56:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:52.991 13:56:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:52.991 13:56:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.991 13:56:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:52.991 13:56:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.991 13:56:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.991 13:56:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.963 13:56:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
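Teardown here has a fixed shape: unload the initiator-side kernel modules (nvme-tcp, then nvme-fabrics, with the rmmod lines above as their verbose output), then kill the target app by pid with a couple of guards so a stale pid or a sudo wrapper is never signalled blindly. A minimal sketch of that killprocess guard, paraphrasing the autotest_common.sh trace above (the real helper treats the sudo case specially; this sketch simply refuses):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 1                  # is the pid still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1     # don't signal the sudo wrapper itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                         # reap it; its exit code doesn't matter here
    }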
00:14:54.963 00:14:54.963 real 0m7.053s 00:14:54.963 user 0m21.705s 00:14:54.963 sys 0m1.144s 00:14:54.963 13:56:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.963 13:56:45 -- common/autotest_common.sh@10 -- # set +x 00:14:54.963 ************************************ 00:14:54.963 END TEST nvmf_host_management 00:14:54.963 ************************************ 00:14:55.222 13:56:46 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:55.222 00:14:55.222 real 0m12.718s 00:14:55.222 user 0m23.318s 00:14:55.222 sys 0m5.230s 00:14:55.222 13:56:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:55.222 13:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:55.222 ************************************ 00:14:55.222 END TEST nvmf_host_management 00:14:55.222 ************************************ 00:14:55.222 13:56:46 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:55.222 13:56:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:55.222 13:56:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:55.222 13:56:46 -- common/autotest_common.sh@10 -- # set +x 00:14:55.222 ************************************ 00:14:55.222 START TEST nvmf_lvol 00:14:55.222 ************************************ 00:14:55.222 13:56:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:55.222 * Looking for test storage... 00:14:55.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:55.222 13:56:46 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:55.222 13:56:46 -- nvmf/common.sh@7 -- # uname -s 00:14:55.222 13:56:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:55.222 13:56:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:55.222 13:56:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:55.222 13:56:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:55.222 13:56:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:55.222 13:56:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:55.222 13:56:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:55.223 13:56:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:55.223 13:56:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:55.223 13:56:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:55.223 13:56:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.223 13:56:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:55.223 13:56:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:55.223 13:56:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:55.223 13:56:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:55.223 13:56:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:55.223 13:56:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:55.223 13:56:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:55.223 13:56:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:55.223 13:56:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three toolchain dirs repeated four more times; elided]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:55.223 13:56:46 -- paths/export.sh@3 -- # PATH=[the same list with /opt/go/1.21.1/bin prepended; elided] 00:14:55.223 13:56:46 -- paths/export.sh@4 -- # PATH=[the same list with /opt/protoc/21.7/bin prepended; elided] 00:14:55.223 13:56:46 -- paths/export.sh@5 -- # export PATH 00:14:55.223 13:56:46 -- paths/export.sh@6 -- # echo [the exported PATH, as assembled above; elided] 00:14:55.223 13:56:46 -- nvmf/common.sh@46 -- # : 0 00:14:55.223 13:56:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:55.223 13:56:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:55.223 13:56:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:55.223 13:56:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:55.223 13:56:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:55.223 13:56:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:55.223 13:56:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:55.223 13:56:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:55.223 13:56:46 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:55.223 13:56:46 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:55.223 13:56:46 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:55.223 13:56:46 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:55.223 13:56:46 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:55.223 13:56:46 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:55.223 13:56:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:55.223 13:56:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT
SIGTERM EXIT 00:14:55.223 13:56:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:55.223 13:56:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:55.223 13:56:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:55.223 13:56:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:55.223 13:56:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:55.223 13:56:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:55.223 13:56:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:55.223 13:56:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:55.223 13:56:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:55.223 13:56:46 -- common/autotest_common.sh@10 -- # set +x 00:15:00.496 13:56:51 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:00.496 13:56:51 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:00.496 13:56:51 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:00.496 13:56:51 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:00.496 13:56:51 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:00.496 13:56:51 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:00.496 13:56:51 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:00.496 13:56:51 -- nvmf/common.sh@294 -- # net_devs=() 00:15:00.496 13:56:51 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:00.496 13:56:51 -- nvmf/common.sh@295 -- # e810=() 00:15:00.496 13:56:51 -- nvmf/common.sh@295 -- # local -ga e810 00:15:00.496 13:56:51 -- nvmf/common.sh@296 -- # x722=() 00:15:00.496 13:56:51 -- nvmf/common.sh@296 -- # local -ga x722 00:15:00.496 13:56:51 -- nvmf/common.sh@297 -- # mlx=() 00:15:00.496 13:56:51 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:00.496 13:56:51 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.496 13:56:51 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:00.496 13:56:51 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:00.496 13:56:51 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:00.496 13:56:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:00.496 13:56:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:00.496 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:00.496 13:56:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@349 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:00.496 13:56:51 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:00.496 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:00.496 13:56:51 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:00.496 13:56:51 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:00.496 13:56:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.496 13:56:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:00.496 13:56:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.496 13:56:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:00.496 Found net devices under 0000:86:00.0: cvl_0_0 00:15:00.496 13:56:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.496 13:56:51 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:00.496 13:56:51 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.496 13:56:51 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:00.496 13:56:51 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.496 13:56:51 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:00.496 Found net devices under 0000:86:00.1: cvl_0_1 00:15:00.496 13:56:51 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.496 13:56:51 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:00.496 13:56:51 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:00.496 13:56:51 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:00.496 13:56:51 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:00.496 13:56:51 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:00.496 13:56:51 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:00.496 13:56:51 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:00.496 13:56:51 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:00.496 13:56:51 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:00.496 13:56:51 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:00.496 13:56:51 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:00.496 13:56:51 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:00.496 13:56:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:00.496 13:56:51 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:00.496 13:56:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:00.496 13:56:51 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:00.496 13:56:51 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:00.496 13:56:51 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
00:15:00.496 13:56:51 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:00.496 13:56:51 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:00.496 13:56:51 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:00.496 13:56:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:00.756 13:56:51 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:00.756 13:56:51 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:00.756 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.756 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:15:00.756 00:15:00.756 --- 10.0.0.2 ping statistics --- 00:15:00.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.756 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:15:00.756 13:56:51 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:00.756 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:00.756 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:15:00.756 00:15:00.756 --- 10.0.0.1 ping statistics --- 00:15:00.756 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.756 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:15:00.756 13:56:51 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.756 13:56:51 -- nvmf/common.sh@410 -- # return 0 00:15:00.756 13:56:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:00.756 13:56:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.756 13:56:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:00.756 13:56:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:00.756 13:56:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.756 13:56:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:00.756 13:56:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:00.756 13:56:51 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:00.756 13:56:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:00.756 13:56:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:00.756 13:56:51 -- common/autotest_common.sh@10 -- # set +x 00:15:00.756 13:56:51 -- nvmf/common.sh@469 -- # nvmfpid=3223001 00:15:00.756 13:56:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:00.756 13:56:51 -- nvmf/common.sh@470 -- # waitforlisten 3223001 00:15:00.756 13:56:51 -- common/autotest_common.sh@819 -- # '[' -z 3223001 ']' 00:15:00.756 13:56:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.756 13:56:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.756 13:56:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.756 13:56:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.756 13:56:51 -- common/autotest_common.sh@10 -- # set +x 00:15:00.756 [2024-07-23 13:56:51.634448] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
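The SPDK target starting here is launched inside the network namespace the nvmf_tcp_init trace just finished building: the first e810 port (cvl_0_0) was moved into a private namespace as the target side, the second port (cvl_0_1) stayed in the root namespace as the initiator side, and a one-packet ping in each direction proved the 10.0.0.0/24 link. The same setup, condensed from the trace above into plain commands (interface, namespace, and address names as in the log):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7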
00:15:00.756 [2024-07-23 13:56:51.634490] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.756 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.756 [2024-07-23 13:56:51.693006] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:00.756 [2024-07-23 13:56:51.768857] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:00.756 [2024-07-23 13:56:51.768971] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.756 [2024-07-23 13:56:51.768978] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.756 [2024-07-23 13:56:51.768985] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:00.756 [2024-07-23 13:56:51.769020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.756 [2024-07-23 13:56:51.769050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.756 [2024-07-23 13:56:51.769050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.693 13:56:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.693 13:56:52 -- common/autotest_common.sh@852 -- # return 0 00:15:01.693 13:56:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:01.693 13:56:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:01.693 13:56:52 -- common/autotest_common.sh@10 -- # set +x 00:15:01.693 13:56:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.693 13:56:52 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:01.693 [2024-07-23 13:56:52.602568] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.693 13:56:52 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:01.951 13:56:52 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:01.951 13:56:52 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:02.210 13:56:52 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:02.210 13:56:52 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:02.210 13:56:53 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:02.471 13:56:53 -- target/nvmf_lvol.sh@29 -- # lvs=9bee2034-fae0-4495-aaae-d7b47a564861 00:15:02.471 13:56:53 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9bee2034-fae0-4495-aaae-d7b47a564861 lvol 20 00:15:02.730 13:56:53 -- target/nvmf_lvol.sh@32 -- # lvol=5e623a3d-c18c-4737-8f5d-2dca0b156ab2 00:15:02.730 13:56:53 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:02.730 13:56:53 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
5e623a3d-c18c-4737-8f5d-2dca0b156ab2 00:15:02.988 13:56:53 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:03.246 [2024-07-23 13:56:54.063806] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:03.246 13:56:54 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:03.504 13:56:54 -- target/nvmf_lvol.sh@42 -- # perf_pid=3223438 00:15:03.504 13:56:54 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:03.504 13:56:54 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:03.504 EAL: No free 2048 kB hugepages reported on node 1 00:15:04.440 13:56:55 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 5e623a3d-c18c-4737-8f5d-2dca0b156ab2 MY_SNAPSHOT 00:15:04.699 13:56:55 -- target/nvmf_lvol.sh@47 -- # snapshot=3f58bad5-04e0-4a96-86d8-eb5e24b45274 00:15:04.699 13:56:55 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 5e623a3d-c18c-4737-8f5d-2dca0b156ab2 30 00:15:04.699 13:56:55 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 3f58bad5-04e0-4a96-86d8-eb5e24b45274 MY_CLONE 00:15:04.958 13:56:55 -- target/nvmf_lvol.sh@49 -- # clone=78e0c89f-3ddc-4d50-82db-7e74a8c1353d 00:15:04.958 13:56:55 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 78e0c89f-3ddc-4d50-82db-7e74a8c1353d 00:15:05.526 13:56:56 -- target/nvmf_lvol.sh@53 -- # wait 3223438 00:15:15.501 Initializing NVMe Controllers 00:15:15.501 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:15.501 Controller IO queue size 128, less than required. 00:15:15.501 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:15.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:15.501 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:15.501 Initialization complete. Launching workers. 
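While spdk_nvme_perf pushes 4096-byte random writes at queue depth 128 from cores 3 and 4 (the -c 0x18 mask), the script mutates the logical-volume stack underneath the live namespace: snapshot the lvol, grow the original from its initial size (20) to the final size (30), clone the snapshot, and inflate the clone so it no longer shares clusters with the snapshot. The ten-second run whose results follow is what verifies I/O survives all four operations. The same sequence as plain rpc.py calls (UUIDs are the ones reported above):

    rpc=scripts/rpc.py
    $rpc bdev_lvol_snapshot 5e623a3d-c18c-4737-8f5d-2dca0b156ab2 MY_SNAPSHOT
    $rpc bdev_lvol_resize   5e623a3d-c18c-4737-8f5d-2dca0b156ab2 30   # grow the live volume
    $rpc bdev_lvol_clone    3f58bad5-04e0-4a96-86d8-eb5e24b45274 MY_CLONE
    $rpc bdev_lvol_inflate  78e0c89f-3ddc-4d50-82db-7e74a8c1353d      # detach the clone from its snapshot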
00:15:15.501 ======================================================== 00:15:15.501 Latency(us) 00:15:15.501 Device Information : IOPS MiB/s Average min max 00:15:15.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11917.82 46.55 10744.01 1662.91 63176.87 00:15:15.501 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11771.03 45.98 10875.98 2910.52 64406.41 00:15:15.501 ======================================================== 00:15:15.501 Total : 23688.85 92.53 10809.59 1662.91 64406.41 00:15:15.501 00:15:15.501 13:57:04 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:15.501 13:57:04 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5e623a3d-c18c-4737-8f5d-2dca0b156ab2 00:15:15.501 13:57:05 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9bee2034-fae0-4495-aaae-d7b47a564861 00:15:15.501 13:57:05 -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:15.501 13:57:05 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:15.501 13:57:05 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:15.501 13:57:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:15.501 13:57:05 -- nvmf/common.sh@116 -- # sync 00:15:15.501 13:57:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:15.501 13:57:05 -- nvmf/common.sh@119 -- # set +e 00:15:15.501 13:57:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:15.501 13:57:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:15.501 rmmod nvme_tcp 00:15:15.501 rmmod nvme_fabrics 00:15:15.501 rmmod nvme_keyring 00:15:15.501 13:57:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:15.501 13:57:05 -- nvmf/common.sh@123 -- # set -e 00:15:15.501 13:57:05 -- nvmf/common.sh@124 -- # return 0 00:15:15.501 13:57:05 -- nvmf/common.sh@477 -- # '[' -n 3223001 ']' 00:15:15.501 13:57:05 -- nvmf/common.sh@478 -- # killprocess 3223001 00:15:15.501 13:57:05 -- common/autotest_common.sh@926 -- # '[' -z 3223001 ']' 00:15:15.501 13:57:05 -- common/autotest_common.sh@930 -- # kill -0 3223001 00:15:15.501 13:57:05 -- common/autotest_common.sh@931 -- # uname 00:15:15.501 13:57:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:15.501 13:57:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3223001 00:15:15.501 13:57:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:15.501 13:57:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:15.501 13:57:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3223001' 00:15:15.501 killing process with pid 3223001 00:15:15.501 13:57:05 -- common/autotest_common.sh@945 -- # kill 3223001 00:15:15.501 13:57:05 -- common/autotest_common.sh@950 -- # wait 3223001 00:15:15.501 13:57:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:15.501 13:57:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:15.501 13:57:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:15.501 13:57:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:15.501 13:57:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:15.501 13:57:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:15.501 13:57:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:15.501 13:57:05 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:16.877 13:57:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:16.877 00:15:16.877 real 0m21.657s 00:15:16.877 user 1m4.068s 00:15:16.877 sys 0m6.748s 00:15:16.877 13:57:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.877 13:57:07 -- common/autotest_common.sh@10 -- # set +x 00:15:16.877 ************************************ 00:15:16.877 END TEST nvmf_lvol 00:15:16.877 ************************************ 00:15:16.877 13:57:07 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:16.877 13:57:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:16.877 13:57:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:16.877 13:57:07 -- common/autotest_common.sh@10 -- # set +x 00:15:16.877 ************************************ 00:15:16.877 START TEST nvmf_lvs_grow 00:15:16.877 ************************************ 00:15:16.877 13:57:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:16.877 * Looking for test storage... 00:15:16.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.877 13:57:07 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.877 13:57:07 -- nvmf/common.sh@7 -- # uname -s 00:15:16.877 13:57:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.877 13:57:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.877 13:57:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.877 13:57:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.878 13:57:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.878 13:57:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.878 13:57:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.878 13:57:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.878 13:57:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.878 13:57:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.878 13:57:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.878 13:57:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.878 13:57:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.878 13:57:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.878 13:57:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.878 13:57:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.878 13:57:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.878 13:57:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.878 13:57:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.878 13:57:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.878 13:57:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.878 13:57:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.878 13:57:07 -- paths/export.sh@5 -- # export PATH 00:15:16.878 13:57:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.878 13:57:07 -- nvmf/common.sh@46 -- # : 0 00:15:16.878 13:57:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:16.878 13:57:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:16.878 13:57:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:16.878 13:57:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.878 13:57:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.878 13:57:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:16.878 13:57:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:16.878 13:57:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.878 13:57:07 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.878 13:57:07 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:16.878 13:57:07 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:15:16.878 13:57:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:16.878 13:57:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:16.878 13:57:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:16.878 13:57:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:16.878 13:57:07 -- nvmf/common.sh@400 -- # 
remove_spdk_ns 00:15:16.878 13:57:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:16.878 13:57:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:16.878 13:57:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.878 13:57:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:16.878 13:57:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:16.878 13:57:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:16.878 13:57:07 -- common/autotest_common.sh@10 -- # set +x 00:15:23.443 13:57:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:23.443 13:57:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:23.443 13:57:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:23.443 13:57:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:23.443 13:57:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:23.443 13:57:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:23.443 13:57:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:23.443 13:57:13 -- nvmf/common.sh@294 -- # net_devs=() 00:15:23.443 13:57:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:23.443 13:57:13 -- nvmf/common.sh@295 -- # e810=() 00:15:23.443 13:57:13 -- nvmf/common.sh@295 -- # local -ga e810 00:15:23.443 13:57:13 -- nvmf/common.sh@296 -- # x722=() 00:15:23.443 13:57:13 -- nvmf/common.sh@296 -- # local -ga x722 00:15:23.443 13:57:13 -- nvmf/common.sh@297 -- # mlx=() 00:15:23.443 13:57:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:23.443 13:57:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:23.443 13:57:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:23.443 13:57:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:23.443 13:57:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:23.443 13:57:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:23.443 13:57:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:23.443 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:23.443 13:57:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:23.443 
13:57:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:23.443 13:57:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:23.443 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:23.443 13:57:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:23.443 13:57:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:23.443 13:57:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.443 13:57:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:23.443 13:57:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.443 13:57:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:23.443 Found net devices under 0000:86:00.0: cvl_0_0 00:15:23.443 13:57:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.443 13:57:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:23.443 13:57:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:23.443 13:57:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:23.443 13:57:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:23.443 13:57:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:23.443 Found net devices under 0000:86:00.1: cvl_0_1 00:15:23.443 13:57:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:23.443 13:57:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:23.443 13:57:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:23.443 13:57:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:23.443 13:57:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:23.443 13:57:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:23.443 13:57:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:23.443 13:57:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:23.443 13:57:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:23.443 13:57:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:23.443 13:57:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:23.443 13:57:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:23.443 13:57:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:23.443 13:57:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:23.443 13:57:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:23.443 13:57:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:23.443 13:57:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:23.443 13:57:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:23.443 13:57:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:23.443 13:57:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:23.443 
13:57:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:23.443 13:57:13 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:23.443 13:57:13 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:23.443 13:57:13 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:23.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:23.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:15:23.443 00:15:23.443 --- 10.0.0.2 ping statistics --- 00:15:23.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.443 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:15:23.443 13:57:13 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:23.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:23.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:15:23.443 00:15:23.443 --- 10.0.0.1 ping statistics --- 00:15:23.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:23.443 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:15:23.443 13:57:13 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:23.443 13:57:13 -- nvmf/common.sh@410 -- # return 0 00:15:23.443 13:57:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:23.443 13:57:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:23.443 13:57:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:23.443 13:57:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:23.443 13:57:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:23.443 13:57:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:23.443 13:57:13 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:15:23.443 13:57:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:23.443 13:57:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:23.443 13:57:13 -- common/autotest_common.sh@10 -- # set +x 00:15:23.443 13:57:13 -- nvmf/common.sh@469 -- # nvmfpid=3229354 00:15:23.443 13:57:13 -- nvmf/common.sh@470 -- # waitforlisten 3229354 00:15:23.443 13:57:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:23.443 13:57:13 -- common/autotest_common.sh@819 -- # '[' -z 3229354 ']' 00:15:23.443 13:57:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.443 13:57:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:23.443 13:57:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.443 13:57:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:23.443 13:57:13 -- common/autotest_common.sh@10 -- # set +x 00:15:23.443 [2024-07-23 13:57:13.656991] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
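The namespace plumbing traced above is what separates target from initiator on a single host: one port of the E810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while its peer port (cvl_0_1) stays in the root namespace as 10.0.0.1 for the initiator, and each direction is verified with a one-packet ping before nvmf_tgt is started inside the namespace. A minimal sketch of the same setup, assuming the interface names from this run and root privileges (the iptables rule only matters if an INPUT policy would otherwise drop port 4420):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                   # initiator -> target reachability check
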
00:15:23.443 [2024-07-23 13:57:13.657034] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:23.443 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.443 [2024-07-23 13:57:13.714736] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.443 [2024-07-23 13:57:13.789100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:23.443 [2024-07-23 13:57:13.789211] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:23.443 [2024-07-23 13:57:13.789218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:23.443 [2024-07-23 13:57:13.789225] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:23.443 [2024-07-23 13:57:13.789243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.443 13:57:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:23.443 13:57:14 -- common/autotest_common.sh@852 -- # return 0 00:15:23.443 13:57:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:23.443 13:57:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:23.443 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:15:23.701 13:57:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:23.701 [2024-07-23 13:57:14.625496] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:15:23.701 13:57:14 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:23.701 13:57:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:23.701 13:57:14 -- common/autotest_common.sh@10 -- # set +x 00:15:23.701 ************************************ 00:15:23.701 START TEST lvs_grow_clean 00:15:23.701 ************************************ 00:15:23.701 13:57:14 -- common/autotest_common.sh@1104 -- # lvs_grow 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:23.701 13:57:14 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:23.959 13:57:14 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:23.959 13:57:14 -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:24.216 13:57:15 -- target/nvmf_lvs_grow.sh@28 -- # lvs=78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:24.216 13:57:15 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:24.216 13:57:15 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:24.216 13:57:15 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:24.216 13:57:15 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:24.216 13:57:15 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 lvol 150 00:15:24.475 13:57:15 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c1d66297-bf73-48dc-b699-b3ac749ad5cb 00:15:24.475 13:57:15 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:24.475 13:57:15 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:24.733 [2024-07-23 13:57:15.520768] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:24.733 [2024-07-23 13:57:15.520820] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:24.733 true 00:15:24.733 13:57:15 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:24.733 13:57:15 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:24.733 13:57:15 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:24.733 13:57:15 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:24.992 13:57:15 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c1d66297-bf73-48dc-b699-b3ac749ad5cb 00:15:25.250 13:57:16 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:25.250 [2024-07-23 13:57:16.170760] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:25.250 13:57:16 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.509 13:57:16 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:25.509 13:57:16 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3229861 00:15:25.509 13:57:16 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.509 13:57:16 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3229861 /var/tmp/bdevperf.sock 00:15:25.509 13:57:16 -- common/autotest_common.sh@819 -- # '[' -z 3229861 ']' 00:15:25.509 
13:57:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:25.509 13:57:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.509 13:57:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:25.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:25.509 13:57:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.509 13:57:16 -- common/autotest_common.sh@10 -- # set +x 00:15:25.509 [2024-07-23 13:57:16.359936] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:25.509 [2024-07-23 13:57:16.359980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3229861 ] 00:15:25.509 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.509 [2024-07-23 13:57:16.411678] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.509 [2024-07-23 13:57:16.481345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.444 13:57:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:26.444 13:57:17 -- common/autotest_common.sh@852 -- # return 0 00:15:26.444 13:57:17 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:26.702 Nvme0n1 00:15:26.702 13:57:17 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:26.702 [ 00:15:26.702 { 00:15:26.702 "name": "Nvme0n1", 00:15:26.702 "aliases": [ 00:15:26.702 "c1d66297-bf73-48dc-b699-b3ac749ad5cb" 00:15:26.702 ], 00:15:26.702 "product_name": "NVMe disk", 00:15:26.702 "block_size": 4096, 00:15:26.702 "num_blocks": 38912, 00:15:26.702 "uuid": "c1d66297-bf73-48dc-b699-b3ac749ad5cb", 00:15:26.702 "assigned_rate_limits": { 00:15:26.702 "rw_ios_per_sec": 0, 00:15:26.702 "rw_mbytes_per_sec": 0, 00:15:26.703 "r_mbytes_per_sec": 0, 00:15:26.703 "w_mbytes_per_sec": 0 00:15:26.703 }, 00:15:26.703 "claimed": false, 00:15:26.703 "zoned": false, 00:15:26.703 "supported_io_types": { 00:15:26.703 "read": true, 00:15:26.703 "write": true, 00:15:26.703 "unmap": true, 00:15:26.703 "write_zeroes": true, 00:15:26.703 "flush": true, 00:15:26.703 "reset": true, 00:15:26.703 "compare": true, 00:15:26.703 "compare_and_write": true, 00:15:26.703 "abort": true, 00:15:26.703 "nvme_admin": true, 00:15:26.703 "nvme_io": true 00:15:26.703 }, 00:15:26.703 "driver_specific": { 00:15:26.703 "nvme": [ 00:15:26.703 { 00:15:26.703 "trid": { 00:15:26.703 "trtype": "TCP", 00:15:26.703 "adrfam": "IPv4", 00:15:26.703 "traddr": "10.0.0.2", 00:15:26.703 "trsvcid": "4420", 00:15:26.703 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:26.703 }, 00:15:26.703 "ctrlr_data": { 00:15:26.703 "cntlid": 1, 00:15:26.703 "vendor_id": "0x8086", 00:15:26.703 "model_number": "SPDK bdev Controller", 00:15:26.703 "serial_number": "SPDK0", 00:15:26.703 "firmware_revision": "24.01.1", 00:15:26.703 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:26.703 "oacs": { 00:15:26.703 "security": 0, 00:15:26.703 "format": 0, 00:15:26.703 "firmware": 0, 00:15:26.703 "ns_manage": 0 00:15:26.703 }, 00:15:26.703 "multi_ctrlr": 
true, 00:15:26.703 "ana_reporting": false 00:15:26.703 }, 00:15:26.703 "vs": { 00:15:26.703 "nvme_version": "1.3" 00:15:26.703 }, 00:15:26.703 "ns_data": { 00:15:26.703 "id": 1, 00:15:26.703 "can_share": true 00:15:26.703 } 00:15:26.703 } 00:15:26.703 ], 00:15:26.703 "mp_policy": "active_passive" 00:15:26.703 } 00:15:26.703 } 00:15:26.703 ] 00:15:26.703 13:57:17 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3230099 00:15:26.703 13:57:17 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:26.703 13:57:17 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:26.963 Running I/O for 10 seconds... 00:15:27.900 Latency(us) 00:15:27.900 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:27.900 Nvme0n1 : 1.00 22395.00 87.48 0.00 0.00 0.00 0.00 0.00 00:15:27.900 =================================================================================================================== 00:15:27.900 Total : 22395.00 87.48 0.00 0.00 0.00 0.00 0.00 00:15:27.900 00:15:28.838 13:57:19 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:28.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:28.838 Nvme0n1 : 2.00 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:15:28.838 =================================================================================================================== 00:15:28.838 Total : 22669.50 88.55 0.00 0.00 0.00 0.00 0.00 00:15:28.838 00:15:28.838 true 00:15:29.099 13:57:19 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:29.099 13:57:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:29.099 13:57:20 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:29.099 13:57:20 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:29.099 13:57:20 -- target/nvmf_lvs_grow.sh@65 -- # wait 3230099 00:15:30.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:30.037 Nvme0n1 : 3.00 22625.00 88.38 0.00 0.00 0.00 0.00 0.00 00:15:30.037 =================================================================================================================== 00:15:30.037 Total : 22625.00 88.38 0.00 0.00 0.00 0.00 0.00 00:15:30.037 00:15:30.973 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:30.973 Nvme0n1 : 4.00 22740.75 88.83 0.00 0.00 0.00 0.00 0.00 00:15:30.973 =================================================================================================================== 00:15:30.973 Total : 22740.75 88.83 0.00 0.00 0.00 0.00 0.00 00:15:30.973 00:15:31.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.908 Nvme0n1 : 5.00 22810.20 89.10 0.00 0.00 0.00 0.00 0.00 00:15:31.908 =================================================================================================================== 00:15:31.908 Total : 22810.20 89.10 0.00 0.00 0.00 0.00 0.00 00:15:31.908 00:15:32.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.856 Nvme0n1 : 6.00 22871.33 89.34 0.00 0.00 0.00 0.00 0.00 00:15:32.856 
=================================================================================================================== 00:15:32.856 Total : 22871.33 89.34 0.00 0.00 0.00 0.00 0.00 00:15:32.856 00:15:33.792 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.792 Nvme0n1 : 7.00 22917.14 89.52 0.00 0.00 0.00 0.00 0.00 00:15:33.792 =================================================================================================================== 00:15:33.792 Total : 22917.14 89.52 0.00 0.00 0.00 0.00 0.00 00:15:33.792 00:15:35.169 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.169 Nvme0n1 : 8.00 23027.62 89.95 0.00 0.00 0.00 0.00 0.00 00:15:35.169 =================================================================================================================== 00:15:35.169 Total : 23027.62 89.95 0.00 0.00 0.00 0.00 0.00 00:15:35.169 00:15:36.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:36.105 Nvme0n1 : 9.00 23149.89 90.43 0.00 0.00 0.00 0.00 0.00 00:15:36.105 =================================================================================================================== 00:15:36.105 Total : 23149.89 90.43 0.00 0.00 0.00 0.00 0.00 00:15:36.105 00:15:37.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.168 Nvme0n1 : 10.00 23241.30 90.79 0.00 0.00 0.00 0.00 0.00 00:15:37.168 =================================================================================================================== 00:15:37.168 Total : 23241.30 90.79 0.00 0.00 0.00 0.00 0.00 00:15:37.168 00:15:37.168 00:15:37.168 Latency(us) 00:15:37.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.168 Nvme0n1 : 10.00 23244.69 90.80 0.00 0.00 5503.37 2379.24 20401.64 00:15:37.168 =================================================================================================================== 00:15:37.168 Total : 23244.69 90.80 0.00 0.00 5503.37 2379.24 20401.64 00:15:37.168 0 00:15:37.168 13:57:27 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3229861 00:15:37.168 13:57:27 -- common/autotest_common.sh@926 -- # '[' -z 3229861 ']' 00:15:37.168 13:57:27 -- common/autotest_common.sh@930 -- # kill -0 3229861 00:15:37.168 13:57:27 -- common/autotest_common.sh@931 -- # uname 00:15:37.168 13:57:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:37.168 13:57:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3229861 00:15:37.168 13:57:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:37.168 13:57:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:37.168 13:57:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3229861' 00:15:37.168 killing process with pid 3229861 00:15:37.168 13:57:27 -- common/autotest_common.sh@945 -- # kill 3229861 00:15:37.168 Received shutdown signal, test time was about 10.000000 seconds 00:15:37.168 00:15:37.168 Latency(us) 00:15:37.168 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:37.168 =================================================================================================================== 00:15:37.168 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:37.168 13:57:27 -- common/autotest_common.sh@950 -- # wait 3229861 00:15:37.168 13:57:28 -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:37.427 13:57:28 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:37.427 13:57:28 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:37.427 13:57:28 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:37.427 13:57:28 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:15:37.427 13:57:28 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:37.685 [2024-07-23 13:57:28.561319] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:37.685 13:57:28 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:37.685 13:57:28 -- common/autotest_common.sh@640 -- # local es=0 00:15:37.685 13:57:28 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:37.685 13:57:28 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.685 13:57:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.685 13:57:28 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.685 13:57:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.685 13:57:28 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.685 13:57:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:37.685 13:57:28 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:37.685 13:57:28 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:37.685 13:57:28 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:37.944 request: 00:15:37.944 { 00:15:37.944 "uuid": "78792fca-70ef-4c1f-80b1-8b0e28db0f38", 00:15:37.944 "method": "bdev_lvol_get_lvstores", 00:15:37.944 "req_id": 1 00:15:37.944 } 00:15:37.944 Got JSON-RPC error response 00:15:37.944 response: 00:15:37.944 { 00:15:37.944 "code": -19, 00:15:37.944 "message": "No such device" 00:15:37.944 } 00:15:37.944 13:57:28 -- common/autotest_common.sh@643 -- # es=1 00:15:37.944 13:57:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:37.944 13:57:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:37.944 13:57:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:37.944 13:57:28 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:37.944 aio_bdev 00:15:37.944 13:57:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c1d66297-bf73-48dc-b699-b3ac749ad5cb 00:15:37.944 13:57:28 -- common/autotest_common.sh@887 -- # local bdev_name=c1d66297-bf73-48dc-b699-b3ac749ad5cb 00:15:37.944 13:57:28 -- 
common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:37.944 13:57:28 -- common/autotest_common.sh@889 -- # local i 00:15:37.944 13:57:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:37.944 13:57:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:37.944 13:57:28 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:38.203 13:57:29 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c1d66297-bf73-48dc-b699-b3ac749ad5cb -t 2000 00:15:38.461 [ 00:15:38.461 { 00:15:38.461 "name": "c1d66297-bf73-48dc-b699-b3ac749ad5cb", 00:15:38.461 "aliases": [ 00:15:38.461 "lvs/lvol" 00:15:38.461 ], 00:15:38.461 "product_name": "Logical Volume", 00:15:38.461 "block_size": 4096, 00:15:38.461 "num_blocks": 38912, 00:15:38.461 "uuid": "c1d66297-bf73-48dc-b699-b3ac749ad5cb", 00:15:38.461 "assigned_rate_limits": { 00:15:38.461 "rw_ios_per_sec": 0, 00:15:38.461 "rw_mbytes_per_sec": 0, 00:15:38.461 "r_mbytes_per_sec": 0, 00:15:38.461 "w_mbytes_per_sec": 0 00:15:38.461 }, 00:15:38.461 "claimed": false, 00:15:38.461 "zoned": false, 00:15:38.461 "supported_io_types": { 00:15:38.461 "read": true, 00:15:38.461 "write": true, 00:15:38.461 "unmap": true, 00:15:38.462 "write_zeroes": true, 00:15:38.462 "flush": false, 00:15:38.462 "reset": true, 00:15:38.462 "compare": false, 00:15:38.462 "compare_and_write": false, 00:15:38.462 "abort": false, 00:15:38.462 "nvme_admin": false, 00:15:38.462 "nvme_io": false 00:15:38.462 }, 00:15:38.462 "driver_specific": { 00:15:38.462 "lvol": { 00:15:38.462 "lvol_store_uuid": "78792fca-70ef-4c1f-80b1-8b0e28db0f38", 00:15:38.462 "base_bdev": "aio_bdev", 00:15:38.462 "thin_provision": false, 00:15:38.462 "snapshot": false, 00:15:38.462 "clone": false, 00:15:38.462 "esnap_clone": false 00:15:38.462 } 00:15:38.462 } 00:15:38.462 } 00:15:38.462 ] 00:15:38.462 13:57:29 -- common/autotest_common.sh@895 -- # return 0 00:15:38.462 13:57:29 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:38.462 13:57:29 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:38.462 13:57:29 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:38.462 13:57:29 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:38.462 13:57:29 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:38.720 13:57:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:38.720 13:57:29 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c1d66297-bf73-48dc-b699-b3ac749ad5cb 00:15:38.978 13:57:29 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78792fca-70ef-4c1f-80b1-8b0e28db0f38 00:15:38.978 13:57:29 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.236 00:15:39.236 real 0m15.484s 00:15:39.236 user 0m15.106s 00:15:39.236 sys 0m1.419s 00:15:39.236 13:57:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 
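Condensed, the clean variant finishing here does the following: create a 200 MiB file-backed AIO bdev, build an lvstore with 4 MiB clusters on it (49 data clusters), carve a 150 MiB lvol, export it over NVMe/TCP, grow the backing file to 400 MiB, rescan the AIO bdev, and grow the lvstore to 99 clusters while bdevperf keeps randwrite I/O running against the attached Nvme0n1. A sketch of the grow sequence alone, assuming scripts/rpc.py from an SPDK checkout and an illustrative backing-file path in place of the test's aio_bdev file:

  truncate -s 200M /tmp/aio_bdev_file
  scripts/rpc.py bdev_aio_create /tmp/aio_bdev_file aio_bdev 4096
  LVS=$(scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  scripts/rpc.py bdev_lvol_create -u "$LVS" lvol 150
  truncate -s 400M /tmp/aio_bdev_file                 # grow the file underneath the bdev
  scripts/rpc.py bdev_aio_rescan aio_bdev             # pick up the new size (51200 -> 102400 blocks)
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$LVS"     # total_data_clusters: 49 -> 99
  scripts/rpc.py bdev_lvol_get_lvstores -u "$LVS" | jq -r '.[0].total_data_clusters'
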
00:15:39.236 13:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:39.236 ************************************ 00:15:39.236 END TEST lvs_grow_clean 00:15:39.236 ************************************ 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:39.236 13:57:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:39.236 13:57:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:39.236 13:57:30 -- common/autotest_common.sh@10 -- # set +x 00:15:39.236 ************************************ 00:15:39.236 START TEST lvs_grow_dirty 00:15:39.236 ************************************ 00:15:39.236 13:57:30 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:39.236 13:57:30 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:39.494 13:57:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:39.494 13:57:30 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:39.760 13:57:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:39.760 13:57:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:39.760 13:57:30 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:39.760 13:57:30 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:39.760 13:57:30 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:39.760 13:57:30 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5f254bf1-a7fc-44b7-882a-d068b392df53 lvol 150 00:15:40.017 13:57:30 -- target/nvmf_lvs_grow.sh@33 -- # lvol=29ad23e0-ff90-49e6-8d3f-1c2230703680 00:15:40.017 13:57:30 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:40.017 13:57:30 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:40.017 [2024-07-23 13:57:31.016445] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:40.017 [2024-07-23 13:57:31.016497] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:40.017 
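The resize notice above is easy to sanity-check: 51200 blocks x 4096 B = 200 MiB and 102400 blocks x 4096 B = 400 MiB, exactly the two truncate sizes, so the rescan has picked up the grown backing file. The accompanying "Unsupported bdev event: type 1" from vbdev_lvol is the lvol layer declining to act on the base bdev's resize notification (event type 1 is the bdev resize event in this SPDK version); the lvstore itself only grows when bdev_lvol_grow_lvstore is invoked explicitly, as the test does next. Quick shell arithmetic for the sizes:

  echo $(( 51200  * 4096 / 1048576 ))   # 200 (MiB before rescan)
  echo $(( 102400 * 4096 / 1048576 ))   # 400 (MiB after rescan)
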
true 00:15:40.017 13:57:31 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:40.017 13:57:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:40.275 13:57:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:40.275 13:57:31 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:40.533 13:57:31 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 29ad23e0-ff90-49e6-8d3f-1c2230703680 00:15:40.533 13:57:31 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:40.791 13:57:31 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:41.049 13:57:31 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:41.049 13:57:31 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3232493 00:15:41.049 13:57:31 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:41.049 13:57:31 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3232493 /var/tmp/bdevperf.sock 00:15:41.049 13:57:31 -- common/autotest_common.sh@819 -- # '[' -z 3232493 ']' 00:15:41.049 13:57:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:41.049 13:57:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:41.049 13:57:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:41.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:41.049 13:57:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:41.049 13:57:31 -- common/autotest_common.sh@10 -- # set +x 00:15:41.049 [2024-07-23 13:57:31.856147] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:41.049 [2024-07-23 13:57:31.856198] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3232493 ] 00:15:41.049 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.049 [2024-07-23 13:57:31.908509] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.049 [2024-07-23 13:57:31.978037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.982 13:57:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:41.982 13:57:32 -- common/autotest_common.sh@852 -- # return 0 00:15:41.982 13:57:32 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:41.982 Nvme0n1 00:15:41.982 13:57:32 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:42.240 [ 00:15:42.240 { 00:15:42.240 "name": "Nvme0n1", 00:15:42.240 "aliases": [ 00:15:42.240 "29ad23e0-ff90-49e6-8d3f-1c2230703680" 00:15:42.240 ], 00:15:42.240 "product_name": "NVMe disk", 00:15:42.240 "block_size": 4096, 00:15:42.240 "num_blocks": 38912, 00:15:42.240 "uuid": "29ad23e0-ff90-49e6-8d3f-1c2230703680", 00:15:42.240 "assigned_rate_limits": { 00:15:42.240 "rw_ios_per_sec": 0, 00:15:42.240 "rw_mbytes_per_sec": 0, 00:15:42.240 "r_mbytes_per_sec": 0, 00:15:42.240 "w_mbytes_per_sec": 0 00:15:42.240 }, 00:15:42.240 "claimed": false, 00:15:42.240 "zoned": false, 00:15:42.240 "supported_io_types": { 00:15:42.240 "read": true, 00:15:42.240 "write": true, 00:15:42.240 "unmap": true, 00:15:42.240 "write_zeroes": true, 00:15:42.240 "flush": true, 00:15:42.240 "reset": true, 00:15:42.240 "compare": true, 00:15:42.240 "compare_and_write": true, 00:15:42.240 "abort": true, 00:15:42.240 "nvme_admin": true, 00:15:42.240 "nvme_io": true 00:15:42.240 }, 00:15:42.240 "driver_specific": { 00:15:42.240 "nvme": [ 00:15:42.240 { 00:15:42.240 "trid": { 00:15:42.240 "trtype": "TCP", 00:15:42.240 "adrfam": "IPv4", 00:15:42.240 "traddr": "10.0.0.2", 00:15:42.240 "trsvcid": "4420", 00:15:42.240 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:42.240 }, 00:15:42.240 "ctrlr_data": { 00:15:42.240 "cntlid": 1, 00:15:42.240 "vendor_id": "0x8086", 00:15:42.240 "model_number": "SPDK bdev Controller", 00:15:42.240 "serial_number": "SPDK0", 00:15:42.240 "firmware_revision": "24.01.1", 00:15:42.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:42.240 "oacs": { 00:15:42.240 "security": 0, 00:15:42.240 "format": 0, 00:15:42.240 "firmware": 0, 00:15:42.240 "ns_manage": 0 00:15:42.240 }, 00:15:42.240 "multi_ctrlr": true, 00:15:42.240 "ana_reporting": false 00:15:42.240 }, 00:15:42.240 "vs": { 00:15:42.240 "nvme_version": "1.3" 00:15:42.240 }, 00:15:42.240 "ns_data": { 00:15:42.240 "id": 1, 00:15:42.240 "can_share": true 00:15:42.240 } 00:15:42.240 } 00:15:42.240 ], 00:15:42.240 "mp_policy": "active_passive" 00:15:42.240 } 00:15:42.240 } 00:15:42.240 ] 00:15:42.240 13:57:33 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:42.240 13:57:33 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3232731 00:15:42.240 13:57:33 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:42.240 Running I/O 
for 10 seconds... 00:15:43.174 Latency(us) 00:15:43.174 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:43.174 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.174 Nvme0n1 : 1.00 23251.00 90.82 0.00 0.00 0.00 0.00 0.00 00:15:43.174 =================================================================================================================== 00:15:43.174 Total : 23251.00 90.82 0.00 0.00 0.00 0.00 0.00 00:15:43.174 00:15:44.108 13:57:35 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:44.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.366 Nvme0n1 : 2.00 23320.50 91.10 0.00 0.00 0.00 0.00 0.00 00:15:44.366 =================================================================================================================== 00:15:44.366 Total : 23320.50 91.10 0.00 0.00 0.00 0.00 0.00 00:15:44.366 00:15:44.366 true 00:15:44.366 13:57:35 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:44.366 13:57:35 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:44.624 13:57:35 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:44.624 13:57:35 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:44.624 13:57:35 -- target/nvmf_lvs_grow.sh@65 -- # wait 3232731 00:15:45.191 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.191 Nvme0n1 : 3.00 23200.33 90.63 0.00 0.00 0.00 0.00 0.00 00:15:45.191 =================================================================================================================== 00:15:45.191 Total : 23200.33 90.63 0.00 0.00 0.00 0.00 0.00 00:15:45.191 00:15:46.566 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:46.566 Nvme0n1 : 4.00 23190.25 90.59 0.00 0.00 0.00 0.00 0.00 00:15:46.566 =================================================================================================================== 00:15:46.566 Total : 23190.25 90.59 0.00 0.00 0.00 0.00 0.00 00:15:46.566 00:15:47.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:47.500 Nvme0n1 : 5.00 23187.40 90.58 0.00 0.00 0.00 0.00 0.00 00:15:47.500 =================================================================================================================== 00:15:47.500 Total : 23187.40 90.58 0.00 0.00 0.00 0.00 0.00 00:15:47.500 00:15:48.434 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:48.435 Nvme0n1 : 6.00 23198.83 90.62 0.00 0.00 0.00 0.00 0.00 00:15:48.435 =================================================================================================================== 00:15:48.435 Total : 23198.83 90.62 0.00 0.00 0.00 0.00 0.00 00:15:48.435 00:15:49.370 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:49.370 Nvme0n1 : 7.00 23241.71 90.79 0.00 0.00 0.00 0.00 0.00 00:15:49.370 =================================================================================================================== 00:15:49.370 Total : 23241.71 90.79 0.00 0.00 0.00 0.00 0.00 00:15:49.370 00:15:50.304 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:50.304 Nvme0n1 : 8.00 23290.25 90.98 0.00 0.00 0.00 0.00 0.00 00:15:50.304 
=================================================================================================================== 00:15:50.304 Total : 23290.25 90.98 0.00 0.00 0.00 0.00 0.00 00:15:50.304 00:15:51.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.239 Nvme0n1 : 9.00 23336.00 91.16 0.00 0.00 0.00 0.00 0.00 00:15:51.239 =================================================================================================================== 00:15:51.239 Total : 23336.00 91.16 0.00 0.00 0.00 0.00 0.00 00:15:51.239 00:15:52.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.615 Nvme0n1 : 10.00 23378.70 91.32 0.00 0.00 0.00 0.00 0.00 00:15:52.615 =================================================================================================================== 00:15:52.615 Total : 23378.70 91.32 0.00 0.00 0.00 0.00 0.00 00:15:52.615 00:15:52.615 00:15:52.615 Latency(us) 00:15:52.615 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.615 Nvme0n1 : 10.01 23378.56 91.32 0.00 0.00 5471.16 2649.93 27582.11 00:15:52.615 =================================================================================================================== 00:15:52.615 Total : 23378.56 91.32 0.00 0.00 5471.16 2649.93 27582.11 00:15:52.615 0 00:15:52.615 13:57:43 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3232493 00:15:52.615 13:57:43 -- common/autotest_common.sh@926 -- # '[' -z 3232493 ']' 00:15:52.615 13:57:43 -- common/autotest_common.sh@930 -- # kill -0 3232493 00:15:52.615 13:57:43 -- common/autotest_common.sh@931 -- # uname 00:15:52.615 13:57:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:52.616 13:57:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3232493 00:15:52.616 13:57:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:52.616 13:57:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:52.616 13:57:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3232493' 00:15:52.616 killing process with pid 3232493 00:15:52.616 13:57:43 -- common/autotest_common.sh@945 -- # kill 3232493 00:15:52.616 Received shutdown signal, test time was about 10.000000 seconds 00:15:52.616 00:15:52.616 Latency(us) 00:15:52.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.616 =================================================================================================================== 00:15:52.616 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:52.616 13:57:43 -- common/autotest_common.sh@950 -- # wait 3232493 00:15:52.616 13:57:43 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:52.874 13:57:43 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:52.874 13:57:43 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:15:52.874 13:57:43 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:15:52.874 13:57:43 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:15:52.874 13:57:43 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 3229354 00:15:52.874 13:57:43 -- target/nvmf_lvs_grow.sh@74 -- # wait 3229354 00:15:53.164 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 3229354 Killed "${NVMF_APP[@]}" "$@" 00:15:53.164 13:57:43 -- target/nvmf_lvs_grow.sh@74 -- # true 00:15:53.164 13:57:43 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:15:53.164 13:57:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:53.164 13:57:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:53.164 13:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:53.164 13:57:43 -- nvmf/common.sh@469 -- # nvmfpid=3234603 00:15:53.164 13:57:43 -- nvmf/common.sh@470 -- # waitforlisten 3234603 00:15:53.164 13:57:43 -- common/autotest_common.sh@819 -- # '[' -z 3234603 ']' 00:15:53.164 13:57:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:53.164 13:57:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:53.164 13:57:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:53.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:53.164 13:57:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:53.164 13:57:43 -- common/autotest_common.sh@10 -- # set +x 00:15:53.164 13:57:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:53.164 [2024-07-23 13:57:43.949817] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:53.164 [2024-07-23 13:57:43.949862] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:53.164 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.164 [2024-07-23 13:57:44.007762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:53.164 [2024-07-23 13:57:44.083886] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.164 [2024-07-23 13:57:44.083989] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.164 [2024-07-23 13:57:44.083998] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.164 [2024-07-23 13:57:44.084004] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
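The restart traced here is the heart of the dirty-lvstore scenario: the previous target was killed with kill -9 while the lvstore still had unwritten metadata, and nvmfappstart -m 0x1 now brings up a fresh single-core nvmf_tgt that must recover the blobstore. A minimal stand-alone sketch of that launch-and-wait step, assuming an SPDK checkout at $SPDK_DIR and the cvl_0_0_ns_spdk namespace created during nvmftestinit; the poll loop and defaults are illustrative, not copied from the harness helpers:

```bash
#!/usr/bin/env bash
# Sketch of the nvmfappstart/waitforlisten sequence seen in the trace above.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

sudo ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# /var/tmp/spdk.sock is a UNIX socket, so it is reachable from outside the
# network namespace; rpc_get_methods succeeds once the RPC server is up.
until "$SPDK_DIR/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"
```

Immediately after this point the trace shows bs_recover replaying the blobstore metadata on bdev_aio_create, which is exactly the behavior the dirty test asserts.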
00:15:53.164 [2024-07-23 13:57:44.084019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.738 13:57:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:53.738 13:57:44 -- common/autotest_common.sh@852 -- # return 0 00:15:53.738 13:57:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:53.738 13:57:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:53.738 13:57:44 -- common/autotest_common.sh@10 -- # set +x 00:15:53.997 13:57:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.997 13:57:44 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:53.997 [2024-07-23 13:57:44.932686] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:53.997 [2024-07-23 13:57:44.932776] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:53.997 [2024-07-23 13:57:44.932802] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:53.997 13:57:44 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:15:53.997 13:57:44 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 29ad23e0-ff90-49e6-8d3f-1c2230703680 00:15:53.997 13:57:44 -- common/autotest_common.sh@887 -- # local bdev_name=29ad23e0-ff90-49e6-8d3f-1c2230703680 00:15:53.997 13:57:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:53.997 13:57:44 -- common/autotest_common.sh@889 -- # local i 00:15:53.997 13:57:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:53.997 13:57:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:53.997 13:57:44 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:54.256 13:57:45 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 29ad23e0-ff90-49e6-8d3f-1c2230703680 -t 2000 00:15:54.256 [ 00:15:54.256 { 00:15:54.256 "name": "29ad23e0-ff90-49e6-8d3f-1c2230703680", 00:15:54.256 "aliases": [ 00:15:54.256 "lvs/lvol" 00:15:54.256 ], 00:15:54.256 "product_name": "Logical Volume", 00:15:54.256 "block_size": 4096, 00:15:54.256 "num_blocks": 38912, 00:15:54.256 "uuid": "29ad23e0-ff90-49e6-8d3f-1c2230703680", 00:15:54.256 "assigned_rate_limits": { 00:15:54.256 "rw_ios_per_sec": 0, 00:15:54.256 "rw_mbytes_per_sec": 0, 00:15:54.256 "r_mbytes_per_sec": 0, 00:15:54.256 "w_mbytes_per_sec": 0 00:15:54.256 }, 00:15:54.256 "claimed": false, 00:15:54.256 "zoned": false, 00:15:54.256 "supported_io_types": { 00:15:54.256 "read": true, 00:15:54.256 "write": true, 00:15:54.256 "unmap": true, 00:15:54.256 "write_zeroes": true, 00:15:54.256 "flush": false, 00:15:54.256 "reset": true, 00:15:54.256 "compare": false, 00:15:54.256 "compare_and_write": false, 00:15:54.256 "abort": false, 00:15:54.256 "nvme_admin": false, 00:15:54.256 "nvme_io": false 00:15:54.256 }, 00:15:54.256 "driver_specific": { 00:15:54.256 "lvol": { 00:15:54.256 "lvol_store_uuid": "5f254bf1-a7fc-44b7-882a-d068b392df53", 00:15:54.256 "base_bdev": "aio_bdev", 00:15:54.256 "thin_provision": false, 00:15:54.256 "snapshot": false, 00:15:54.256 "clone": false, 00:15:54.256 "esnap_clone": false 00:15:54.256 } 00:15:54.256 } 00:15:54.256 } 00:15:54.256 ] 00:15:54.515 13:57:45 -- common/autotest_common.sh@895 -- # return 0 00:15:54.515 13:57:45 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:54.515 13:57:45 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:15:54.515 13:57:45 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:15:54.515 13:57:45 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:54.515 13:57:45 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:15:54.774 13:57:45 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:15:54.774 13:57:45 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:54.774 [2024-07-23 13:57:45.769258] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:55.033 13:57:45 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:55.033 13:57:45 -- common/autotest_common.sh@640 -- # local es=0 00:15:55.033 13:57:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:55.033 13:57:45 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:55.033 13:57:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:55.033 13:57:45 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:55.033 13:57:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:55.033 13:57:45 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:55.033 13:57:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:55.033 13:57:45 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:55.033 13:57:45 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:55.033 13:57:45 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:55.033 request: 00:15:55.033 { 00:15:55.033 "uuid": "5f254bf1-a7fc-44b7-882a-d068b392df53", 00:15:55.033 "method": "bdev_lvol_get_lvstores", 00:15:55.033 "req_id": 1 00:15:55.033 } 00:15:55.033 Got JSON-RPC error response 00:15:55.033 response: 00:15:55.033 { 00:15:55.033 "code": -19, 00:15:55.033 "message": "No such device" 00:15:55.033 } 00:15:55.033 13:57:45 -- common/autotest_common.sh@643 -- # es=1 00:15:55.033 13:57:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:55.033 13:57:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:55.033 13:57:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:55.033 13:57:45 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:55.292 aio_bdev 00:15:55.292 13:57:46 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 29ad23e0-ff90-49e6-8d3f-1c2230703680 00:15:55.292 13:57:46 -- 
common/autotest_common.sh@887 -- # local bdev_name=29ad23e0-ff90-49e6-8d3f-1c2230703680 00:15:55.292 13:57:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:15:55.292 13:57:46 -- common/autotest_common.sh@889 -- # local i 00:15:55.292 13:57:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:15:55.292 13:57:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:15:55.292 13:57:46 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:55.551 13:57:46 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 29ad23e0-ff90-49e6-8d3f-1c2230703680 -t 2000 00:15:55.551 [ 00:15:55.551 { 00:15:55.551 "name": "29ad23e0-ff90-49e6-8d3f-1c2230703680", 00:15:55.551 "aliases": [ 00:15:55.551 "lvs/lvol" 00:15:55.551 ], 00:15:55.551 "product_name": "Logical Volume", 00:15:55.551 "block_size": 4096, 00:15:55.551 "num_blocks": 38912, 00:15:55.551 "uuid": "29ad23e0-ff90-49e6-8d3f-1c2230703680", 00:15:55.551 "assigned_rate_limits": { 00:15:55.551 "rw_ios_per_sec": 0, 00:15:55.551 "rw_mbytes_per_sec": 0, 00:15:55.551 "r_mbytes_per_sec": 0, 00:15:55.551 "w_mbytes_per_sec": 0 00:15:55.551 }, 00:15:55.551 "claimed": false, 00:15:55.551 "zoned": false, 00:15:55.551 "supported_io_types": { 00:15:55.551 "read": true, 00:15:55.551 "write": true, 00:15:55.551 "unmap": true, 00:15:55.551 "write_zeroes": true, 00:15:55.551 "flush": false, 00:15:55.551 "reset": true, 00:15:55.551 "compare": false, 00:15:55.551 "compare_and_write": false, 00:15:55.551 "abort": false, 00:15:55.551 "nvme_admin": false, 00:15:55.551 "nvme_io": false 00:15:55.551 }, 00:15:55.551 "driver_specific": { 00:15:55.551 "lvol": { 00:15:55.551 "lvol_store_uuid": "5f254bf1-a7fc-44b7-882a-d068b392df53", 00:15:55.551 "base_bdev": "aio_bdev", 00:15:55.551 "thin_provision": false, 00:15:55.551 "snapshot": false, 00:15:55.551 "clone": false, 00:15:55.551 "esnap_clone": false 00:15:55.551 } 00:15:55.551 } 00:15:55.551 } 00:15:55.551 ] 00:15:55.551 13:57:46 -- common/autotest_common.sh@895 -- # return 0 00:15:55.551 13:57:46 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:55.551 13:57:46 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:15:55.809 13:57:46 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:15:55.809 13:57:46 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:55.809 13:57:46 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:15:55.810 13:57:46 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:15:55.810 13:57:46 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 29ad23e0-ff90-49e6-8d3f-1c2230703680 00:15:56.069 13:57:46 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5f254bf1-a7fc-44b7-882a-d068b392df53 00:15:56.327 13:57:47 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:56.327 13:57:47 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:56.586 00:15:56.586 real 0m17.172s 00:15:56.586 user 
0m43.695s 00:15:56.586 sys 0m4.030s 00:15:56.586 13:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:56.586 13:57:47 -- common/autotest_common.sh@10 -- # set +x 00:15:56.586 ************************************ 00:15:56.586 END TEST lvs_grow_dirty 00:15:56.586 ************************************ 00:15:56.586 13:57:47 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:15:56.586 13:57:47 -- common/autotest_common.sh@796 -- # type=--id 00:15:56.586 13:57:47 -- common/autotest_common.sh@797 -- # id=0 00:15:56.586 13:57:47 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:15:56.586 13:57:47 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:56.586 13:57:47 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:15:56.586 13:57:47 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:15:56.586 13:57:47 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:15:56.586 13:57:47 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:56.586 nvmf_trace.0 00:15:56.586 13:57:47 -- common/autotest_common.sh@811 -- # return 0 00:15:56.586 13:57:47 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:56.586 13:57:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:56.586 13:57:47 -- nvmf/common.sh@116 -- # sync 00:15:56.586 13:57:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:56.586 13:57:47 -- nvmf/common.sh@119 -- # set +e 00:15:56.586 13:57:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:56.586 13:57:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:56.586 rmmod nvme_tcp 00:15:56.586 rmmod nvme_fabrics 00:15:56.586 rmmod nvme_keyring 00:15:56.586 13:57:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:56.586 13:57:47 -- nvmf/common.sh@123 -- # set -e 00:15:56.586 13:57:47 -- nvmf/common.sh@124 -- # return 0 00:15:56.586 13:57:47 -- nvmf/common.sh@477 -- # '[' -n 3234603 ']' 00:15:56.586 13:57:47 -- nvmf/common.sh@478 -- # killprocess 3234603 00:15:56.586 13:57:47 -- common/autotest_common.sh@926 -- # '[' -z 3234603 ']' 00:15:56.586 13:57:47 -- common/autotest_common.sh@930 -- # kill -0 3234603 00:15:56.586 13:57:47 -- common/autotest_common.sh@931 -- # uname 00:15:56.586 13:57:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:56.586 13:57:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3234603 00:15:56.586 13:57:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:56.586 13:57:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:56.586 13:57:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3234603' 00:15:56.586 killing process with pid 3234603 00:15:56.586 13:57:47 -- common/autotest_common.sh@945 -- # kill 3234603 00:15:56.586 13:57:47 -- common/autotest_common.sh@950 -- # wait 3234603 00:15:56.845 13:57:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:56.845 13:57:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:56.845 13:57:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:56.845 13:57:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:56.845 13:57:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:56.845 13:57:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.845 13:57:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:56.845 13:57:47 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:15:59.382 13:57:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:59.382 00:15:59.382 real 0m42.025s 00:15:59.382 user 1m4.465s 00:15:59.382 sys 0m10.207s 00:15:59.382 13:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:59.382 13:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:59.382 ************************************ 00:15:59.382 END TEST nvmf_lvs_grow 00:15:59.382 ************************************ 00:15:59.382 13:57:49 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:59.382 13:57:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:59.382 13:57:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:59.382 13:57:49 -- common/autotest_common.sh@10 -- # set +x 00:15:59.382 ************************************ 00:15:59.382 START TEST nvmf_bdev_io_wait 00:15:59.382 ************************************ 00:15:59.382 13:57:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:59.382 * Looking for test storage... 00:15:59.382 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:59.382 13:57:49 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:59.382 13:57:49 -- nvmf/common.sh@7 -- # uname -s 00:15:59.382 13:57:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:59.382 13:57:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:59.382 13:57:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:59.382 13:57:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:59.382 13:57:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:59.382 13:57:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:59.382 13:57:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:59.382 13:57:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:59.382 13:57:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:59.382 13:57:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:59.382 13:57:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.382 13:57:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.382 13:57:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:59.382 13:57:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:59.382 13:57:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:59.382 13:57:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:59.382 13:57:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:59.382 13:57:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:59.382 13:57:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:59.382 13:57:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.382 13:57:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.382 13:57:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.382 13:57:49 -- paths/export.sh@5 -- # export PATH 00:15:59.382 13:57:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:59.382 13:57:49 -- nvmf/common.sh@46 -- # : 0 00:15:59.382 13:57:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:59.382 13:57:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:59.382 13:57:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:59.382 13:57:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:59.382 13:57:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:59.382 13:57:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:59.382 13:57:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:59.382 13:57:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:59.382 13:57:49 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:59.382 13:57:49 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:59.382 13:57:49 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:59.382 13:57:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:59.382 13:57:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:59.382 13:57:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:59.382 13:57:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:59.382 13:57:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:59.382 13:57:49 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:59.382 13:57:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:59.382 13:57:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:59.382 13:57:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:59.382 13:57:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:59.382 13:57:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:59.382 13:57:49 -- common/autotest_common.sh@10 -- # set +x 00:16:04.655 13:57:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:04.655 13:57:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:04.655 13:57:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:04.655 13:57:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:04.655 13:57:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:04.655 13:57:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:04.655 13:57:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:04.655 13:57:54 -- nvmf/common.sh@294 -- # net_devs=() 00:16:04.655 13:57:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:04.655 13:57:54 -- nvmf/common.sh@295 -- # e810=() 00:16:04.655 13:57:54 -- nvmf/common.sh@295 -- # local -ga e810 00:16:04.655 13:57:54 -- nvmf/common.sh@296 -- # x722=() 00:16:04.655 13:57:54 -- nvmf/common.sh@296 -- # local -ga x722 00:16:04.655 13:57:54 -- nvmf/common.sh@297 -- # mlx=() 00:16:04.655 13:57:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:04.655 13:57:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:04.655 13:57:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:04.655 13:57:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:04.655 13:57:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:04.655 13:57:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:04.655 13:57:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:04.655 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:04.655 13:57:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 
00:16:04.655 13:57:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:04.655 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:04.655 13:57:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:04.655 13:57:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:04.655 13:57:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.655 13:57:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:04.655 13:57:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.655 13:57:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:04.655 Found net devices under 0000:86:00.0: cvl_0_0 00:16:04.655 13:57:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.655 13:57:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:04.655 13:57:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:04.655 13:57:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:04.655 13:57:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:04.655 13:57:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:04.655 Found net devices under 0000:86:00.1: cvl_0_1 00:16:04.655 13:57:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:04.655 13:57:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:04.655 13:57:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:04.655 13:57:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:04.655 13:57:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:04.655 13:57:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:04.655 13:57:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:04.655 13:57:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:04.655 13:57:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:04.655 13:57:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:04.655 13:57:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:04.655 13:57:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:04.655 13:57:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:04.655 13:57:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:04.655 13:57:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:04.655 13:57:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:04.655 13:57:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:04.655 13:57:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:04.655 13:57:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:04.655 13:57:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:04.655 13:57:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:04.655 13:57:55 -- nvmf/common.sh@259 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:04.655 13:57:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:04.655 13:57:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:04.655 13:57:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:04.655 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:04.655 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:16:04.655 00:16:04.655 --- 10.0.0.2 ping statistics --- 00:16:04.655 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.655 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:04.655 13:57:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:04.655 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:04.655 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:16:04.655 00:16:04.655 --- 10.0.0.1 ping statistics --- 00:16:04.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:04.656 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:04.656 13:57:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:04.656 13:57:55 -- nvmf/common.sh@410 -- # return 0 00:16:04.656 13:57:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:04.656 13:57:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:04.656 13:57:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:04.656 13:57:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:04.656 13:57:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:04.656 13:57:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:04.656 13:57:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:04.656 13:57:55 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:04.656 13:57:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:04.656 13:57:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:04.656 13:57:55 -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 13:57:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:04.656 13:57:55 -- nvmf/common.sh@469 -- # nvmfpid=3238675 00:16:04.656 13:57:55 -- nvmf/common.sh@470 -- # waitforlisten 3238675 00:16:04.656 13:57:55 -- common/autotest_common.sh@819 -- # '[' -z 3238675 ']' 00:16:04.656 13:57:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:04.656 13:57:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:04.656 13:57:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:04.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:04.656 13:57:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:04.656 13:57:55 -- common/autotest_common.sh@10 -- # set +x 00:16:04.656 [2024-07-23 13:57:55.215522] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:04.656 [2024-07-23 13:57:55.215568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:04.656 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.656 [2024-07-23 13:57:55.275154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:04.656 [2024-07-23 13:57:55.348396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:04.656 [2024-07-23 13:57:55.348526] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:04.656 [2024-07-23 13:57:55.348534] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:04.656 [2024-07-23 13:57:55.348541] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:04.656 [2024-07-23 13:57:55.348587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:04.656 [2024-07-23 13:57:55.348686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:04.656 [2024-07-23 13:57:55.348753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:04.656 [2024-07-23 13:57:55.348755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.223 13:57:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:05.223 13:57:56 -- common/autotest_common.sh@852 -- # return 0 00:16:05.223 13:57:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:05.223 13:57:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:05.223 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 13:57:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:05.223 13:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.223 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 13:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:05.223 13:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.223 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 13:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:05.223 13:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.223 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 [2024-07-23 13:57:56.126455] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:05.223 13:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:05.223 13:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.223 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 Malloc0 00:16:05.223 13:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:05.223 13:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.223 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 13:57:56 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:05.223 13:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.223 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 13:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:05.223 13:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:05.223 13:57:56 -- common/autotest_common.sh@10 -- # set +x 00:16:05.223 [2024-07-23 13:57:56.181445] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:05.223 13:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3238837 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@30 -- # READ_PID=3238841 00:16:05.223 13:57:56 -- nvmf/common.sh@520 -- # config=() 00:16:05.223 13:57:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:05.223 13:57:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:05.223 13:57:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:05.223 { 00:16:05.223 "params": { 00:16:05.223 "name": "Nvme$subsystem", 00:16:05.223 "trtype": "$TEST_TRANSPORT", 00:16:05.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:05.223 "adrfam": "ipv4", 00:16:05.223 "trsvcid": "$NVMF_PORT", 00:16:05.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:05.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:05.223 "hdgst": ${hdgst:-false}, 00:16:05.223 "ddgst": ${ddgst:-false} 00:16:05.223 }, 00:16:05.223 "method": "bdev_nvme_attach_controller" 00:16:05.223 } 00:16:05.223 EOF 00:16:05.223 )") 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3238843 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:05.223 13:57:56 -- nvmf/common.sh@520 -- # config=() 00:16:05.223 13:57:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:05.223 13:57:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:05.223 13:57:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:05.223 { 00:16:05.223 "params": { 00:16:05.223 "name": "Nvme$subsystem", 00:16:05.223 "trtype": "$TEST_TRANSPORT", 00:16:05.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:05.223 "adrfam": "ipv4", 00:16:05.223 "trsvcid": "$NVMF_PORT", 00:16:05.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:05.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:05.223 "hdgst": ${hdgst:-false}, 00:16:05.223 "ddgst": ${ddgst:-false} 00:16:05.223 }, 00:16:05.223 "method": "bdev_nvme_attach_controller" 00:16:05.223 } 00:16:05.223 EOF 00:16:05.223 )") 00:16:05.223 13:57:56 -- 
target/bdev_io_wait.sh@34 -- # UNMAP_PID=3238848 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@35 -- # sync 00:16:05.223 13:57:56 -- nvmf/common.sh@542 -- # cat 00:16:05.223 13:57:56 -- nvmf/common.sh@520 -- # config=() 00:16:05.223 13:57:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:05.223 13:57:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:05.223 13:57:56 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:05.223 13:57:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:05.223 { 00:16:05.223 "params": { 00:16:05.223 "name": "Nvme$subsystem", 00:16:05.223 "trtype": "$TEST_TRANSPORT", 00:16:05.223 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:05.223 "adrfam": "ipv4", 00:16:05.223 "trsvcid": "$NVMF_PORT", 00:16:05.223 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:05.223 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:05.223 "hdgst": ${hdgst:-false}, 00:16:05.223 "ddgst": ${ddgst:-false} 00:16:05.223 }, 00:16:05.224 "method": "bdev_nvme_attach_controller" 00:16:05.224 } 00:16:05.224 EOF 00:16:05.224 )") 00:16:05.224 13:57:56 -- nvmf/common.sh@520 -- # config=() 00:16:05.224 13:57:56 -- nvmf/common.sh@542 -- # cat 00:16:05.224 13:57:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:05.224 13:57:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:05.224 13:57:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:05.224 { 00:16:05.224 "params": { 00:16:05.224 "name": "Nvme$subsystem", 00:16:05.224 "trtype": "$TEST_TRANSPORT", 00:16:05.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:05.224 "adrfam": "ipv4", 00:16:05.224 "trsvcid": "$NVMF_PORT", 00:16:05.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:05.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:05.224 "hdgst": ${hdgst:-false}, 00:16:05.224 "ddgst": ${ddgst:-false} 00:16:05.224 }, 00:16:05.224 "method": "bdev_nvme_attach_controller" 00:16:05.224 } 00:16:05.224 EOF 00:16:05.224 )") 00:16:05.224 13:57:56 -- nvmf/common.sh@542 -- # cat 00:16:05.224 13:57:56 -- target/bdev_io_wait.sh@37 -- # wait 3238837 00:16:05.224 13:57:56 -- nvmf/common.sh@542 -- # cat 00:16:05.224 13:57:56 -- nvmf/common.sh@544 -- # jq . 00:16:05.224 13:57:56 -- nvmf/common.sh@544 -- # jq . 00:16:05.224 13:57:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:05.224 13:57:56 -- nvmf/common.sh@544 -- # jq . 00:16:05.224 13:57:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:05.224 "params": { 00:16:05.224 "name": "Nvme1", 00:16:05.224 "trtype": "tcp", 00:16:05.224 "traddr": "10.0.0.2", 00:16:05.224 "adrfam": "ipv4", 00:16:05.224 "trsvcid": "4420", 00:16:05.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:05.224 "hdgst": false, 00:16:05.224 "ddgst": false 00:16:05.224 }, 00:16:05.224 "method": "bdev_nvme_attach_controller" 00:16:05.224 }' 00:16:05.224 13:57:56 -- nvmf/common.sh@544 -- # jq . 
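The four interleaved gen_nvmf_target_json expansions here all produce the same one-controller bdev configuration, handed to each bdevperf instance as --json /dev/fd/63 through process substitution. Written out long-hand it is a small standalone file; in this sketch the subsystems/config wrapper layout is an assumption reconstructed from the printed method/params fragment, not quoted from the log:

```bash
# Equivalent standalone bdevperf config for one job; wrapper layout assumed.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
```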
00:16:05.224 13:57:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:05.224 13:57:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:05.224 "params": { 00:16:05.224 "name": "Nvme1", 00:16:05.224 "trtype": "tcp", 00:16:05.224 "traddr": "10.0.0.2", 00:16:05.224 "adrfam": "ipv4", 00:16:05.224 "trsvcid": "4420", 00:16:05.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:05.224 "hdgst": false, 00:16:05.224 "ddgst": false 00:16:05.224 }, 00:16:05.224 "method": "bdev_nvme_attach_controller" 00:16:05.224 }' 00:16:05.224 13:57:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:05.224 13:57:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:05.224 "params": { 00:16:05.224 "name": "Nvme1", 00:16:05.224 "trtype": "tcp", 00:16:05.224 "traddr": "10.0.0.2", 00:16:05.224 "adrfam": "ipv4", 00:16:05.224 "trsvcid": "4420", 00:16:05.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:05.224 "hdgst": false, 00:16:05.224 "ddgst": false 00:16:05.224 }, 00:16:05.224 "method": "bdev_nvme_attach_controller" 00:16:05.224 }' 00:16:05.224 13:57:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:05.224 13:57:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:05.224 "params": { 00:16:05.224 "name": "Nvme1", 00:16:05.224 "trtype": "tcp", 00:16:05.224 "traddr": "10.0.0.2", 00:16:05.224 "adrfam": "ipv4", 00:16:05.224 "trsvcid": "4420", 00:16:05.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:05.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:05.224 "hdgst": false, 00:16:05.224 "ddgst": false 00:16:05.224 }, 00:16:05.224 "method": "bdev_nvme_attach_controller" 00:16:05.224 }' 00:16:05.224 [2024-07-23 13:57:56.226502] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:05.224 [2024-07-23 13:57:56.226556] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:05.224 [2024-07-23 13:57:56.229425] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:05.224 [2024-07-23 13:57:56.229470] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:05.224 [2024-07-23 13:57:56.229510] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:05.224 [2024-07-23 13:57:56.229549] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:05.224 [2024-07-23 13:57:56.231431] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:05.224 [2024-07-23 13:57:56.231477] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:05.483 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.483 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.483 [2024-07-23 13:57:56.411081] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.483 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.483 [2024-07-23 13:57:56.467138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.483 [2024-07-23 13:57:56.492792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:05.742 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.742 [2024-07-23 13:57:56.536867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:05.742 [2024-07-23 13:57:56.564400] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.742 [2024-07-23 13:57:56.609362] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.742 [2024-07-23 13:57:56.651410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:05.742 [2024-07-23 13:57:56.684922] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:16:05.742 Running I/O for 1 seconds... 00:16:06.001 Running I/O for 1 seconds... 00:16:06.001 Running I/O for 1 seconds... 00:16:06.001 Running I/O for 1 seconds... 00:16:06.937 00:16:06.937 Latency(us) 00:16:06.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.937 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:06.937 Nvme1n1 : 1.00 15514.62 60.60 0.00 0.00 8233.76 3048.85 15956.59 00:16:06.937 =================================================================================================================== 00:16:06.937 Total : 15514.62 60.60 0.00 0.00 8233.76 3048.85 15956.59 00:16:06.937 00:16:06.937 Latency(us) 00:16:06.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.937 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:06.937 Nvme1n1 : 1.01 6470.41 25.28 0.00 0.00 19659.76 6895.53 22681.15 00:16:06.937 =================================================================================================================== 00:16:06.937 Total : 6470.41 25.28 0.00 0.00 19659.76 6895.53 22681.15 00:16:06.937 00:16:06.937 Latency(us) 00:16:06.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.938 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:06.938 Nvme1n1 : 1.00 250592.85 978.88 0.00 0.00 509.09 203.91 651.80 00:16:06.938 =================================================================================================================== 00:16:06.938 Total : 250592.85 978.88 0.00 0.00 509.09 203.91 651.80 00:16:06.938 00:16:06.938 Latency(us) 00:16:06.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:06.938 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:06.938 Nvme1n1 : 1.01 6462.67 25.24 0.00 0.00 19757.54 5214.39 38751.72 00:16:06.938 =================================================================================================================== 00:16:06.938 Total : 6462.67 25.24 0.00 0.00 19757.54 5214.39 38751.72 00:16:07.196 13:57:58 -- target/bdev_io_wait.sh@38 -- # wait 3238841 00:16:07.196 
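With all four result tables printed, the remaining waits below just collect the FLUSH and UNMAP jobs. For reference, the whole fan-out reduces to one bdevperf invocation per workload; a hedged sketch reusing the /tmp/nvme1.json file from the previous sketch (the flags mirror the trace, the file name is an assumption):

```bash
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BDEVPERF=$SPDK_DIR/build/examples/bdevperf

# One instance per workload; distinct core masks (-m) and shm ids (-i) keep
# the four DPDK processes from clashing, as in the trace above.
"$BDEVPERF" -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256 & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json /tmp/nvme1.json -q 128 -o 4096 -w read  -t 1 -s 256 & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json /tmp/nvme1.json -q 128 -o 4096 -w flush -t 1 -s 256 & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json /tmp/nvme1.json -q 128 -o 4096 -w unmap -t 1 -s 256 & UNMAP_PID=$!
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"
```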
13:57:58 -- target/bdev_io_wait.sh@39 -- # wait 3238843 00:16:07.196 13:57:58 -- target/bdev_io_wait.sh@40 -- # wait 3238848 00:16:07.196 13:57:58 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:07.196 13:57:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:07.196 13:57:58 -- common/autotest_common.sh@10 -- # set +x 00:16:07.196 13:57:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:07.196 13:57:58 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:07.196 13:57:58 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:07.196 13:57:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:07.196 13:57:58 -- nvmf/common.sh@116 -- # sync 00:16:07.196 13:57:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:07.196 13:57:58 -- nvmf/common.sh@119 -- # set +e 00:16:07.196 13:57:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:07.196 13:57:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:07.196 rmmod nvme_tcp 00:16:07.196 rmmod nvme_fabrics 00:16:07.454 rmmod nvme_keyring 00:16:07.454 13:57:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:07.454 13:57:58 -- nvmf/common.sh@123 -- # set -e 00:16:07.454 13:57:58 -- nvmf/common.sh@124 -- # return 0 00:16:07.454 13:57:58 -- nvmf/common.sh@477 -- # '[' -n 3238675 ']' 00:16:07.454 13:57:58 -- nvmf/common.sh@478 -- # killprocess 3238675 00:16:07.454 13:57:58 -- common/autotest_common.sh@926 -- # '[' -z 3238675 ']' 00:16:07.454 13:57:58 -- common/autotest_common.sh@930 -- # kill -0 3238675 00:16:07.454 13:57:58 -- common/autotest_common.sh@931 -- # uname 00:16:07.454 13:57:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:07.454 13:57:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3238675 00:16:07.454 13:57:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:07.454 13:57:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:07.454 13:57:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3238675' 00:16:07.454 killing process with pid 3238675 00:16:07.454 13:57:58 -- common/autotest_common.sh@945 -- # kill 3238675 00:16:07.454 13:57:58 -- common/autotest_common.sh@950 -- # wait 3238675 00:16:07.713 13:57:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:07.713 13:57:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:07.713 13:57:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:07.713 13:57:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.713 13:57:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:07.713 13:57:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.713 13:57:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.713 13:57:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.617 13:58:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:09.617 00:16:09.617 real 0m10.746s 00:16:09.617 user 0m19.585s 00:16:09.617 sys 0m5.517s 00:16:09.617 13:58:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:09.617 13:58:00 -- common/autotest_common.sh@10 -- # set +x 00:16:09.617 ************************************ 00:16:09.617 END TEST nvmf_bdev_io_wait 00:16:09.617 ************************************ 00:16:09.617 13:58:00 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:09.617 13:58:00 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:16:09.617 13:58:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:09.617 13:58:00 -- common/autotest_common.sh@10 -- # set +x 00:16:09.617 ************************************ 00:16:09.617 START TEST nvmf_queue_depth 00:16:09.617 ************************************ 00:16:09.617 13:58:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:09.876 * Looking for test storage... 00:16:09.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:09.876 13:58:00 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:09.876 13:58:00 -- nvmf/common.sh@7 -- # uname -s 00:16:09.876 13:58:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:09.876 13:58:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:09.876 13:58:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:09.876 13:58:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:09.876 13:58:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:09.876 13:58:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:09.876 13:58:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:09.876 13:58:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:09.876 13:58:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:09.876 13:58:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:09.876 13:58:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.876 13:58:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.876 13:58:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:09.876 13:58:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:09.876 13:58:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:09.876 13:58:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:09.876 13:58:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:09.876 13:58:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:09.876 13:58:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:09.876 13:58:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.876 13:58:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.876 13:58:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.876 13:58:00 -- paths/export.sh@5 -- # export PATH 00:16:09.876 13:58:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:09.876 13:58:00 -- nvmf/common.sh@46 -- # : 0 00:16:09.876 13:58:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:09.876 13:58:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:09.876 13:58:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:09.876 13:58:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:09.876 13:58:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:09.876 13:58:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:09.876 13:58:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:09.876 13:58:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:09.876 13:58:00 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:09.876 13:58:00 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:09.876 13:58:00 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:09.876 13:58:00 -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:09.876 13:58:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:09.876 13:58:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:09.876 13:58:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:09.876 13:58:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:09.876 13:58:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:09.876 13:58:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:09.876 13:58:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:09.876 13:58:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:09.876 13:58:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:09.876 13:58:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:09.876 13:58:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:09.876 13:58:00 -- common/autotest_common.sh@10 -- # set +x 00:16:15.176 13:58:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:15.176 13:58:05 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:15.176 13:58:05 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:15.176 13:58:05 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:15.176 13:58:05 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:15.176 13:58:05 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:15.176 13:58:05 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:15.176 13:58:05 -- nvmf/common.sh@294 -- # net_devs=() 
00:16:15.176 13:58:05 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:15.176 13:58:05 -- nvmf/common.sh@295 -- # e810=() 00:16:15.176 13:58:05 -- nvmf/common.sh@295 -- # local -ga e810 00:16:15.176 13:58:05 -- nvmf/common.sh@296 -- # x722=() 00:16:15.176 13:58:05 -- nvmf/common.sh@296 -- # local -ga x722 00:16:15.176 13:58:05 -- nvmf/common.sh@297 -- # mlx=() 00:16:15.176 13:58:05 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:15.176 13:58:05 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:15.176 13:58:05 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:15.176 13:58:05 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:15.176 13:58:05 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:15.176 13:58:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:15.176 13:58:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:15.176 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:15.176 13:58:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:15.176 13:58:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:15.176 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:15.176 13:58:05 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:15.176 13:58:05 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:15.176 13:58:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.176 13:58:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:15.176 13:58:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:16:15.176 13:58:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:15.176 Found net devices under 0000:86:00.0: cvl_0_0 00:16:15.176 13:58:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.176 13:58:05 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:15.176 13:58:05 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:15.176 13:58:05 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:15.176 13:58:05 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:15.176 13:58:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:15.176 Found net devices under 0000:86:00.1: cvl_0_1 00:16:15.176 13:58:05 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:15.176 13:58:05 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:15.176 13:58:05 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:15.176 13:58:05 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:15.176 13:58:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:15.176 13:58:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:15.176 13:58:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:15.176 13:58:05 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:15.176 13:58:05 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:15.176 13:58:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:15.176 13:58:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:15.176 13:58:05 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:15.176 13:58:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:15.176 13:58:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:15.176 13:58:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:15.176 13:58:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:15.176 13:58:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:15.176 13:58:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:15.176 13:58:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:15.176 13:58:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:15.176 13:58:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:15.176 13:58:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:15.176 13:58:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:15.176 13:58:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:15.176 13:58:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:15.176 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:15.176 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:16:15.176 00:16:15.176 --- 10.0.0.2 ping statistics --- 00:16:15.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.176 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:16:15.176 13:58:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:15.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:15.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:16:15.176 00:16:15.176 --- 10.0.0.1 ping statistics --- 00:16:15.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:15.176 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:16:15.176 13:58:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:15.176 13:58:06 -- nvmf/common.sh@410 -- # return 0 00:16:15.176 13:58:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:15.176 13:58:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:15.176 13:58:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:15.176 13:58:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:15.176 13:58:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:15.176 13:58:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:15.176 13:58:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:15.176 13:58:06 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:15.176 13:58:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:15.176 13:58:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:15.176 13:58:06 -- common/autotest_common.sh@10 -- # set +x 00:16:15.176 13:58:06 -- nvmf/common.sh@469 -- # nvmfpid=3242737 00:16:15.177 13:58:06 -- nvmf/common.sh@470 -- # waitforlisten 3242737 00:16:15.177 13:58:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:15.177 13:58:06 -- common/autotest_common.sh@819 -- # '[' -z 3242737 ']' 00:16:15.177 13:58:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:15.177 13:58:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:15.177 13:58:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:15.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:15.177 13:58:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:15.177 13:58:06 -- common/autotest_common.sh@10 -- # set +x 00:16:15.177 [2024-07-23 13:58:06.162808] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:15.177 [2024-07-23 13:58:06.162852] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:15.177 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.435 [2024-07-23 13:58:06.220382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.435 [2024-07-23 13:58:06.295732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:15.435 [2024-07-23 13:58:06.295843] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:15.435 [2024-07-23 13:58:06.295850] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:15.435 [2024-07-23 13:58:06.295857] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
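Aside: the prologue traced above assembles a self-contained NVMe/TCP rig from the two ports of one E810 NIC — one port (cvl_0_0, 10.0.0.2) is moved into a private network namespace for the target, the other (cvl_0_1, 10.0.0.1) stays in the root namespace for the initiator, a firewall rule admits port 4420, and two pings verify the path before the target app is launched. A minimal sketch of that setup, using only the interface names, addresses, and commands visible in the trace (the suite's nvmf_tcp_init wraps the same steps in extra cleanup and error handling):

# Minimal sketch of the namespace rig built by nvmf_tcp_init (names/IPs as in the log)
ip netns add cvl_0_0_ns_spdk                        # private ns for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the ns
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
ping -c 1 10.0.0.2                                  # root ns -> target port
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator port

With both pings answering, nvme-tcp is loaded and nvmf_tgt is started inside the namespace, which is what the startup notices around this point report.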
00:16:15.435 [2024-07-23 13:58:06.295878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:16.001 13:58:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:16.001 13:58:06 -- common/autotest_common.sh@852 -- # return 0 00:16:16.001 13:58:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:16.001 13:58:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:16.001 13:58:06 -- common/autotest_common.sh@10 -- # set +x 00:16:16.001 13:58:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:16.001 13:58:06 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:16.001 13:58:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:16.001 13:58:06 -- common/autotest_common.sh@10 -- # set +x 00:16:16.001 [2024-07-23 13:58:06.993249] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:16.001 13:58:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:16.001 13:58:06 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:16.001 13:58:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:16.001 13:58:06 -- common/autotest_common.sh@10 -- # set +x 00:16:16.259 Malloc0 00:16:16.259 13:58:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:16.259 13:58:07 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:16.259 13:58:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:16.259 13:58:07 -- common/autotest_common.sh@10 -- # set +x 00:16:16.259 13:58:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:16.260 13:58:07 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:16.260 13:58:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:16.260 13:58:07 -- common/autotest_common.sh@10 -- # set +x 00:16:16.260 13:58:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:16.260 13:58:07 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:16.260 13:58:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:16.260 13:58:07 -- common/autotest_common.sh@10 -- # set +x 00:16:16.260 [2024-07-23 13:58:07.058252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:16.260 13:58:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:16.260 13:58:07 -- target/queue_depth.sh@30 -- # bdevperf_pid=3242768 00:16:16.260 13:58:07 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:16.260 13:58:07 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:16.260 13:58:07 -- target/queue_depth.sh@33 -- # waitforlisten 3242768 /var/tmp/bdevperf.sock 00:16:16.260 13:58:07 -- common/autotest_common.sh@819 -- # '[' -z 3242768 ']' 00:16:16.260 13:58:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:16.260 13:58:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.260 13:58:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:16:16.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:16.260 13:58:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.260 13:58:07 -- common/autotest_common.sh@10 -- # set +x 00:16:16.260 [2024-07-23 13:58:07.104104] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:16.260 [2024-07-23 13:58:07.104145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3242768 ] 00:16:16.260 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.260 [2024-07-23 13:58:07.158651] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.260 [2024-07-23 13:58:07.230136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.190 13:58:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.190 13:58:07 -- common/autotest_common.sh@852 -- # return 0 00:16:17.190 13:58:07 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:17.190 13:58:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.190 13:58:07 -- common/autotest_common.sh@10 -- # set +x 00:16:17.190 NVMe0n1 00:16:17.190 13:58:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.190 13:58:08 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:17.448 Running I/O for 10 seconds... 00:16:27.424 00:16:27.424 Latency(us) 00:16:27.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.424 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:27.424 Verification LBA range: start 0x0 length 0x4000 00:16:27.424 NVMe0n1 : 10.05 18070.98 70.59 0.00 0.00 56501.94 11511.54 58127.58 00:16:27.424 =================================================================================================================== 00:16:27.424 Total : 18070.98 70.59 0.00 0.00 56501.94 11511.54 58127.58 00:16:27.424 0 00:16:27.424 13:58:18 -- target/queue_depth.sh@39 -- # killprocess 3242768 00:16:27.424 13:58:18 -- common/autotest_common.sh@926 -- # '[' -z 3242768 ']' 00:16:27.424 13:58:18 -- common/autotest_common.sh@930 -- # kill -0 3242768 00:16:27.424 13:58:18 -- common/autotest_common.sh@931 -- # uname 00:16:27.424 13:58:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:27.424 13:58:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3242768 00:16:27.424 13:58:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:27.424 13:58:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:27.424 13:58:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3242768' 00:16:27.424 killing process with pid 3242768 00:16:27.424 13:58:18 -- common/autotest_common.sh@945 -- # kill 3242768 00:16:27.424 Received shutdown signal, test time was about 10.000000 seconds 00:16:27.424 00:16:27.424 Latency(us) 00:16:27.424 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.424 =================================================================================================================== 00:16:27.424 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.424 13:58:18 -- 
common/autotest_common.sh@950 -- # wait 3242768 00:16:27.683 13:58:18 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:27.683 13:58:18 -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:27.683 13:58:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:27.683 13:58:18 -- nvmf/common.sh@116 -- # sync 00:16:27.684 13:58:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:27.684 13:58:18 -- nvmf/common.sh@119 -- # set +e 00:16:27.684 13:58:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:27.684 13:58:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:27.684 rmmod nvme_tcp 00:16:27.684 rmmod nvme_fabrics 00:16:27.684 rmmod nvme_keyring 00:16:27.684 13:58:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:27.684 13:58:18 -- nvmf/common.sh@123 -- # set -e 00:16:27.684 13:58:18 -- nvmf/common.sh@124 -- # return 0 00:16:27.684 13:58:18 -- nvmf/common.sh@477 -- # '[' -n 3242737 ']' 00:16:27.684 13:58:18 -- nvmf/common.sh@478 -- # killprocess 3242737 00:16:27.684 13:58:18 -- common/autotest_common.sh@926 -- # '[' -z 3242737 ']' 00:16:27.684 13:58:18 -- common/autotest_common.sh@930 -- # kill -0 3242737 00:16:27.684 13:58:18 -- common/autotest_common.sh@931 -- # uname 00:16:27.684 13:58:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:27.684 13:58:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3242737 00:16:27.684 13:58:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:27.684 13:58:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:27.684 13:58:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3242737' 00:16:27.684 killing process with pid 3242737 00:16:27.684 13:58:18 -- common/autotest_common.sh@945 -- # kill 3242737 00:16:27.684 13:58:18 -- common/autotest_common.sh@950 -- # wait 3242737 00:16:27.943 13:58:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:27.943 13:58:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:27.943 13:58:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:27.943 13:58:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:27.943 13:58:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:27.943 13:58:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.943 13:58:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:27.943 13:58:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.483 13:58:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:30.483 00:16:30.483 real 0m20.350s 00:16:30.483 user 0m24.975s 00:16:30.483 sys 0m5.670s 00:16:30.483 13:58:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.483 13:58:20 -- common/autotest_common.sh@10 -- # set +x 00:16:30.483 ************************************ 00:16:30.483 END TEST nvmf_queue_depth 00:16:30.483 ************************************ 00:16:30.483 13:58:20 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:30.483 13:58:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:30.483 13:58:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.483 13:58:20 -- common/autotest_common.sh@10 -- # set +x 00:16:30.483 ************************************ 00:16:30.483 START TEST nvmf_multipath 00:16:30.483 ************************************ 00:16:30.483 13:58:20 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:30.483 * Looking for test storage... 00:16:30.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.484 13:58:21 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.484 13:58:21 -- nvmf/common.sh@7 -- # uname -s 00:16:30.484 13:58:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.484 13:58:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.484 13:58:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.484 13:58:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.484 13:58:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.484 13:58:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.484 13:58:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.484 13:58:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.484 13:58:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.484 13:58:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.484 13:58:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.484 13:58:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.484 13:58:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.484 13:58:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.484 13:58:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:30.484 13:58:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.484 13:58:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.484 13:58:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.484 13:58:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.484 13:58:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.484 13:58:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.484 13:58:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.484 13:58:21 -- paths/export.sh@5 -- # export PATH 00:16:30.484 13:58:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.484 13:58:21 -- nvmf/common.sh@46 -- # : 0 00:16:30.484 13:58:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:30.484 13:58:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:30.484 13:58:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:30.484 13:58:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.484 13:58:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.484 13:58:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:30.484 13:58:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:30.484 13:58:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:30.484 13:58:21 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.484 13:58:21 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.484 13:58:21 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:30.484 13:58:21 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.484 13:58:21 -- target/multipath.sh@43 -- # nvmftestinit 00:16:30.484 13:58:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:30.484 13:58:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.484 13:58:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:30.484 13:58:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:30.484 13:58:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:30.484 13:58:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.484 13:58:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.484 13:58:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.484 13:58:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:30.484 13:58:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:30.484 13:58:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:30.484 13:58:21 -- common/autotest_common.sh@10 -- # set +x 00:16:35.754 13:58:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:35.754 13:58:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:35.754 13:58:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:35.754 13:58:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:35.754 13:58:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:35.754 13:58:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:35.754 13:58:26 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:16:35.754 13:58:26 -- nvmf/common.sh@294 -- # net_devs=() 00:16:35.754 13:58:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:35.754 13:58:26 -- nvmf/common.sh@295 -- # e810=() 00:16:35.754 13:58:26 -- nvmf/common.sh@295 -- # local -ga e810 00:16:35.754 13:58:26 -- nvmf/common.sh@296 -- # x722=() 00:16:35.754 13:58:26 -- nvmf/common.sh@296 -- # local -ga x722 00:16:35.754 13:58:26 -- nvmf/common.sh@297 -- # mlx=() 00:16:35.754 13:58:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:35.754 13:58:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.754 13:58:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:35.754 13:58:26 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:35.754 13:58:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:35.754 13:58:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:35.754 13:58:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:35.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:35.754 13:58:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:35.754 13:58:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:35.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:35.754 13:58:26 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:35.754 13:58:26 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:35.754 13:58:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:35.754 13:58:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.754 13:58:26 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:16:35.754 13:58:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.755 13:58:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:35.755 Found net devices under 0000:86:00.0: cvl_0_0 00:16:35.755 13:58:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.755 13:58:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:35.755 13:58:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.755 13:58:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:35.755 13:58:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.755 13:58:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:35.755 Found net devices under 0000:86:00.1: cvl_0_1 00:16:35.755 13:58:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.755 13:58:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:35.755 13:58:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:35.755 13:58:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:35.755 13:58:26 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:35.755 13:58:26 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:35.755 13:58:26 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.755 13:58:26 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.755 13:58:26 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.755 13:58:26 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:35.755 13:58:26 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.755 13:58:26 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.755 13:58:26 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:35.755 13:58:26 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.755 13:58:26 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.755 13:58:26 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:35.755 13:58:26 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:35.755 13:58:26 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.755 13:58:26 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.755 13:58:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.755 13:58:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.755 13:58:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:35.755 13:58:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.755 13:58:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.755 13:58:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.755 13:58:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:35.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:16:35.755 00:16:35.755 --- 10.0.0.2 ping statistics --- 00:16:35.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.755 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:16:35.755 13:58:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:35.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:16:35.755 00:16:35.755 --- 10.0.0.1 ping statistics --- 00:16:35.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.755 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:16:35.755 13:58:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.755 13:58:26 -- nvmf/common.sh@410 -- # return 0 00:16:35.755 13:58:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:35.755 13:58:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.755 13:58:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:35.755 13:58:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:35.755 13:58:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.755 13:58:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:35.755 13:58:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:35.755 13:58:26 -- target/multipath.sh@45 -- # '[' -z ']' 00:16:35.755 13:58:26 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:35.755 only one NIC for nvmf test 00:16:35.755 13:58:26 -- target/multipath.sh@47 -- # nvmftestfini 00:16:35.755 13:58:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:35.755 13:58:26 -- nvmf/common.sh@116 -- # sync 00:16:35.755 13:58:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:35.755 13:58:26 -- nvmf/common.sh@119 -- # set +e 00:16:35.755 13:58:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:35.755 13:58:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:35.755 rmmod nvme_tcp 00:16:35.755 rmmod nvme_fabrics 00:16:35.755 rmmod nvme_keyring 00:16:35.755 13:58:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:35.755 13:58:26 -- nvmf/common.sh@123 -- # set -e 00:16:35.755 13:58:26 -- nvmf/common.sh@124 -- # return 0 00:16:35.755 13:58:26 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:16:35.755 13:58:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:35.755 13:58:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:35.755 13:58:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:35.755 13:58:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:35.755 13:58:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:35.755 13:58:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.755 13:58:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:35.755 13:58:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.730 13:58:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:37.730 13:58:28 -- target/multipath.sh@48 -- # exit 0 00:16:37.730 13:58:28 -- target/multipath.sh@1 -- # nvmftestfini 00:16:37.730 13:58:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:37.730 13:58:28 -- nvmf/common.sh@116 -- # sync 00:16:37.730 13:58:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:37.730 13:58:28 -- nvmf/common.sh@119 -- # set +e 00:16:37.730 13:58:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:37.730 13:58:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:37.730 13:58:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:37.730 13:58:28 -- nvmf/common.sh@123 -- # set -e 00:16:37.730 13:58:28 -- nvmf/common.sh@124 -- # return 0 00:16:37.730 13:58:28 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:16:37.730 13:58:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:37.730 13:58:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:37.730 13:58:28 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:16:37.730 13:58:28 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:37.730 13:58:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:37.730 13:58:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.730 13:58:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.730 13:58:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.730 13:58:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:37.730 00:16:37.730 real 0m7.626s 00:16:37.730 user 0m1.525s 00:16:37.730 sys 0m4.095s 00:16:37.730 13:58:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:37.730 13:58:28 -- common/autotest_common.sh@10 -- # set +x 00:16:37.730 ************************************ 00:16:37.730 END TEST nvmf_multipath 00:16:37.730 ************************************ 00:16:37.730 13:58:28 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:37.730 13:58:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:37.730 13:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:37.730 13:58:28 -- common/autotest_common.sh@10 -- # set +x 00:16:37.730 ************************************ 00:16:37.730 START TEST nvmf_zcopy 00:16:37.730 ************************************ 00:16:37.730 13:58:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:37.730 * Looking for test storage... 00:16:37.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.730 13:58:28 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.730 13:58:28 -- nvmf/common.sh@7 -- # uname -s 00:16:37.730 13:58:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.730 13:58:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.730 13:58:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.730 13:58:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.730 13:58:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.730 13:58:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.730 13:58:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.730 13:58:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.730 13:58:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.730 13:58:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.988 13:58:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.988 13:58:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.988 13:58:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.988 13:58:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.988 13:58:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.988 13:58:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.988 13:58:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.988 13:58:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.988 13:58:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.988 13:58:28 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.988 13:58:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.988 13:58:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.988 13:58:28 -- paths/export.sh@5 -- # export PATH 00:16:37.988 13:58:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.988 13:58:28 -- nvmf/common.sh@46 -- # : 0 00:16:37.988 13:58:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:37.988 13:58:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:37.988 13:58:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:37.988 13:58:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.988 13:58:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.989 13:58:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:37.989 13:58:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:37.989 13:58:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:37.989 13:58:28 -- target/zcopy.sh@12 -- # nvmftestinit 00:16:37.989 13:58:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:37.989 13:58:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.989 13:58:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:37.989 13:58:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:37.989 13:58:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:37.989 13:58:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.989 13:58:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:37.989 13:58:28 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.989 13:58:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:37.989 13:58:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:37.989 13:58:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:37.989 13:58:28 -- common/autotest_common.sh@10 -- # set +x 00:16:43.261 13:58:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:43.261 13:58:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:43.261 13:58:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:43.261 13:58:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:43.261 13:58:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:43.261 13:58:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:43.261 13:58:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:43.261 13:58:33 -- nvmf/common.sh@294 -- # net_devs=() 00:16:43.261 13:58:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:43.261 13:58:33 -- nvmf/common.sh@295 -- # e810=() 00:16:43.261 13:58:33 -- nvmf/common.sh@295 -- # local -ga e810 00:16:43.261 13:58:33 -- nvmf/common.sh@296 -- # x722=() 00:16:43.261 13:58:33 -- nvmf/common.sh@296 -- # local -ga x722 00:16:43.261 13:58:33 -- nvmf/common.sh@297 -- # mlx=() 00:16:43.261 13:58:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:43.261 13:58:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.261 13:58:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:43.261 13:58:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:43.261 13:58:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:43.261 13:58:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:43.261 13:58:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:43.261 13:58:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:43.261 13:58:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:43.261 13:58:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:43.261 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:43.261 13:58:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:43.262 13:58:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:43.262 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:43.262 
13:58:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:43.262 13:58:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:43.262 13:58:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.262 13:58:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:43.262 13:58:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.262 13:58:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:43.262 Found net devices under 0000:86:00.0: cvl_0_0 00:16:43.262 13:58:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.262 13:58:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:43.262 13:58:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.262 13:58:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:43.262 13:58:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.262 13:58:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:43.262 Found net devices under 0000:86:00.1: cvl_0_1 00:16:43.262 13:58:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.262 13:58:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:43.262 13:58:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:43.262 13:58:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:43.262 13:58:33 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.262 13:58:33 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.262 13:58:33 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.262 13:58:33 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:43.262 13:58:33 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.262 13:58:33 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.262 13:58:33 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:43.262 13:58:33 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.262 13:58:33 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.262 13:58:33 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:43.262 13:58:33 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:43.262 13:58:33 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.262 13:58:33 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.262 13:58:33 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.262 13:58:33 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.262 13:58:33 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:43.262 13:58:33 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.262 13:58:33 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.262 13:58:33 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.262 13:58:33 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:43.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:16:43.262 00:16:43.262 --- 10.0.0.2 ping statistics --- 00:16:43.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.262 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:16:43.262 13:58:33 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:16:43.262 00:16:43.262 --- 10.0.0.1 ping statistics --- 00:16:43.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.262 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:16:43.262 13:58:33 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.262 13:58:33 -- nvmf/common.sh@410 -- # return 0 00:16:43.262 13:58:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:43.262 13:58:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.262 13:58:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:43.262 13:58:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.262 13:58:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:43.262 13:58:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:43.262 13:58:33 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:43.262 13:58:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:43.262 13:58:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:43.262 13:58:33 -- common/autotest_common.sh@10 -- # set +x 00:16:43.262 13:58:34 -- nvmf/common.sh@469 -- # nvmfpid=3251546 00:16:43.262 13:58:34 -- nvmf/common.sh@470 -- # waitforlisten 3251546 00:16:43.262 13:58:34 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:43.262 13:58:34 -- common/autotest_common.sh@819 -- # '[' -z 3251546 ']' 00:16:43.262 13:58:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.262 13:58:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:43.262 13:58:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.262 13:58:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:43.262 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:43.262 [2024-07-23 13:58:34.048894] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:43.262 [2024-07-23 13:58:34.048936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.262 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.262 [2024-07-23 13:58:34.105572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.262 [2024-07-23 13:58:34.182188] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:43.262 [2024-07-23 13:58:34.182295] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.262 [2024-07-23 13:58:34.182304] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:43.263 [2024-07-23 13:58:34.182310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.263 [2024-07-23 13:58:34.182325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.912 13:58:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:43.912 13:58:34 -- common/autotest_common.sh@852 -- # return 0 00:16:43.912 13:58:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:43.912 13:58:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:43.912 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:43.912 13:58:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:43.912 13:58:34 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:43.912 13:58:34 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:43.912 13:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.912 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:43.912 [2024-07-23 13:58:34.877335] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:43.912 13:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.912 13:58:34 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:43.912 13:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.912 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:43.912 13:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.912 13:58:34 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:43.912 13:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.912 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:43.912 [2024-07-23 13:58:34.897469] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.912 13:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.912 13:58:34 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:43.912 13:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.912 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:43.912 13:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.912 13:58:34 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:43.912 13:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.912 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:43.912 malloc0 00:16:43.912 13:58:34 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:16:43.912 13:58:34 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:43.912 13:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.912 13:58:34 -- common/autotest_common.sh@10 -- # set +x 00:16:44.171 13:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.171 13:58:34 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:44.171 13:58:34 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:44.171 13:58:34 -- nvmf/common.sh@520 -- # config=() 00:16:44.171 13:58:34 -- nvmf/common.sh@520 -- # local subsystem config 00:16:44.171 13:58:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:44.171 13:58:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:44.171 { 00:16:44.171 "params": { 00:16:44.171 "name": "Nvme$subsystem", 00:16:44.171 "trtype": "$TEST_TRANSPORT", 00:16:44.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:44.171 "adrfam": "ipv4", 00:16:44.171 "trsvcid": "$NVMF_PORT", 00:16:44.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:44.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:44.171 "hdgst": ${hdgst:-false}, 00:16:44.171 "ddgst": ${ddgst:-false} 00:16:44.171 }, 00:16:44.171 "method": "bdev_nvme_attach_controller" 00:16:44.171 } 00:16:44.171 EOF 00:16:44.171 )") 00:16:44.171 13:58:34 -- nvmf/common.sh@542 -- # cat 00:16:44.171 13:58:34 -- nvmf/common.sh@544 -- # jq . 00:16:44.171 13:58:34 -- nvmf/common.sh@545 -- # IFS=, 00:16:44.172 13:58:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:44.172 "params": { 00:16:44.172 "name": "Nvme1", 00:16:44.172 "trtype": "tcp", 00:16:44.172 "traddr": "10.0.0.2", 00:16:44.172 "adrfam": "ipv4", 00:16:44.172 "trsvcid": "4420", 00:16:44.172 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:44.172 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:44.172 "hdgst": false, 00:16:44.172 "ddgst": false 00:16:44.172 }, 00:16:44.172 "method": "bdev_nvme_attach_controller" 00:16:44.172 }' 00:16:44.172 [2024-07-23 13:58:34.978899] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:44.172 [2024-07-23 13:58:34.978943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251720 ] 00:16:44.172 EAL: No free 2048 kB hugepages reported on node 1 00:16:44.172 [2024-07-23 13:58:35.034096] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:44.172 [2024-07-23 13:58:35.105140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.430 Running I/O for 10 seconds... 
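In the trace above, bdevperf never receives a config file on disk: gen_nvmf_target_json runs inside a process substitution, so the rendered JSON reaches bdevperf through /dev/fd/62. A minimal standalone sketch of the same pattern (the gen_json helper name is illustrative, not part of the harness; the parameter values mirror the printf output above):

gen_json() {
  # Pretty-print the attach-controller config, as nvmf/common.sh does with 'jq .'
  jq . <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# <(gen_json) expands to a /dev/fd/N path that bdevperf opens like a regular file.
bdevperf --json <(gen_json) -t 10 -q 128 -w verify -o 8192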
00:16:54.403
00:16:54.403 Latency(us)
00:16:54.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:54.403 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:54.403 Verification LBA range: start 0x0 length 0x1000
00:16:54.403 Nvme1n1 : 10.01 12859.93 100.47 0.00 0.00 9930.03 1310.72 31229.33
00:16:54.403 ===================================================================================================================
00:16:54.403 Total : 12859.93 100.47 0.00 0.00 9930.03 1310.72 31229.33
00:16:54.661 13:58:45 -- target/zcopy.sh@39 -- # perfpid=3253584
00:16:54.661 13:58:45 -- target/zcopy.sh@41 -- # xtrace_disable
00:16:54.661 13:58:45 -- common/autotest_common.sh@10 -- # set +x
00:16:54.661 13:58:45 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:54.661 13:58:45 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:54.661 13:58:45 -- nvmf/common.sh@520 -- # config=()
00:16:54.661 13:58:45 -- nvmf/common.sh@520 -- # local subsystem config
00:16:54.661 13:58:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:16:54.661 13:58:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:16:54.662 {
00:16:54.662 "params": {
00:16:54.662 "name": "Nvme$subsystem",
00:16:54.662 "trtype": "$TEST_TRANSPORT",
00:16:54.662 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:54.662 "adrfam": "ipv4",
00:16:54.662 "trsvcid": "$NVMF_PORT",
00:16:54.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:54.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:54.662 "hdgst": ${hdgst:-false},
00:16:54.662 "ddgst": ${ddgst:-false}
00:16:54.662 },
00:16:54.662 "method": "bdev_nvme_attach_controller"
00:16:54.662 }
00:16:54.662 EOF
00:16:54.662 )")
00:16:54.662 13:58:45 -- nvmf/common.sh@542 -- # cat
00:16:54.662 [2024-07-23 13:58:45.507966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:16:54.662 [2024-07-23 13:58:45.507996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:54.662 13:58:45 -- nvmf/common.sh@544 -- # jq .
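As a sanity check on the verify-run table above: 12859.93 IOPS at an 8192-byte I/O size is 12859.93 * 8192 / 2^20 ≈ 100.47 MiB/s, matching the MiB/s column. The trace then backgrounds a second bdevperf (a 50/50 random read/write mix via -w randrw -M 50) and captures its PID so that management RPCs can be driven against the target while I/O is in flight. A sketch of that shape (the harness's literal code is not shown in the log):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
  --json <(gen_nvmf_target_json) -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!            # the trace records this as perfpid=3253584

# ... issue RPCs against the live target here ...

wait "$perfpid"       # propagate bdevperf's exit status to the test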
00:16:54.662 13:58:45 -- nvmf/common.sh@545 -- # IFS=, 00:16:54.662 13:58:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:54.662 "params": { 00:16:54.662 "name": "Nvme1", 00:16:54.662 "trtype": "tcp", 00:16:54.662 "traddr": "10.0.0.2", 00:16:54.662 "adrfam": "ipv4", 00:16:54.662 "trsvcid": "4420", 00:16:54.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:54.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:54.662 "hdgst": false, 00:16:54.662 "ddgst": false 00:16:54.662 }, 00:16:54.662 "method": "bdev_nvme_attach_controller" 00:16:54.662 }' 00:16:54.662 [2024-07-23 13:58:45.519964] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.519976] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.527981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.527991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.536003] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.536013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.543023] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:54.662 [2024-07-23 13:58:45.543080] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253584 ] 00:16:54.662 [2024-07-23 13:58:45.544025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.544038] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.552051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.552061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.564085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.564096] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 EAL: No free 2048 kB hugepages reported on node 1 00:16:54.662 [2024-07-23 13:58:45.572102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.572112] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.580122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.580131] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.588144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.588153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.596167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.596176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.596830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 
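The ERROR pairs that repeat from here to the end of the section all have the same shape: an add-namespace RPC is retried for NSID 1 on cnode1, which already carries that namespace, so subsystem.c rejects each attempt and nvmf_rpc.c reports the failed RPC. An illustrative reproduction of one such attempt, reusing the rpc_cmd arguments from earlier in the trace (the loop itself and the scripts/rpc.py entry point are assumptions, not the harness's literal code):

# Each call is expected to fail with "Requested NSID 1 already in use"
# for as long as malloc0 is still attached as namespace 1.
for _ in $(seq 1 10); do
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done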
00:16:54.662 [2024-07-23 13:58:45.608202] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.608215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.616221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.616230] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.624243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.624252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.632267] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.632280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.640290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.640305] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.652320] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.652329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.660341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.660357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.667325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:54.662 [2024-07-23 13:58:45.668364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.668376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.662 [2024-07-23 13:58:45.676408] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.662 [2024-07-23 13:58:45.676429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.684430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.684451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.696447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.696459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.708480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.708495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.720508] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.720518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.732539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.732550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.740556] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.740565] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.752603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.752620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.760620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.760634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.768639] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.768651] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.776658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.776671] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.784678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.784688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.796712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.796721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.804734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.804743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.812758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.812769] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.820783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.820796] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.828805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.828818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.840842] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.840855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.848868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.848885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.856884] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.856895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 Running I/O for 5 seconds... 
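For reference while reading the 5-second run below: the target-side objects this workload exercises were created earlier in the trace by the following RPC sequence (arguments are verbatim from the rpc_cmd lines above; the scripts/rpc.py invocation form is an assumption):

# zero-copy TCP transport, then a subsystem with one malloc-backed namespace
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1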
00:16:54.921 [2024-07-23 13:58:45.864906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.864916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.885783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.885803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.895982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.896001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.905740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.905801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.912850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.912869] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.923066] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.923084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:54.921 [2024-07-23 13:58:45.931664] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:54.921 [2024-07-23 13:58:45.931682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:45.940969] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:45.940990] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:45.949683] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:45.949701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:45.958205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:45.958224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:45.966557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:45.966575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:45.975538] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:45.975556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:45.990278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:45.990296] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:46.001836] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:46.001854] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:46.009822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 
[2024-07-23 13:58:46.009840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:46.018142] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:46.018161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:46.026988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:46.027006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.179 [2024-07-23 13:58:46.035981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.179 [2024-07-23 13:58:46.035999] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.044059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.044077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.053315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.053333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.061715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.061733] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.070577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.070596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.084497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.084516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.091187] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.091206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.101179] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.101198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.109429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.109447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.117891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.117909] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.132192] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.132211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.139032] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.139057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.148706] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.148724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.156809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.156827] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.165427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.165445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.176570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.176588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.186389] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.186407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.180 [2024-07-23 13:58:46.194875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.180 [2024-07-23 13:58:46.194895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.205193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.205211] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.215462] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.215480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.225864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.225883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.234996] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.235014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.244737] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.244754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.253374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.253392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.260614] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.260632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.270500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.270518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.279040] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.279064] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.288144] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.288163] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.296344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.296362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.306551] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.306569] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.317337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.317355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.325533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.325551] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.334301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.334319] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.342596] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.342614] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.351336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.351354] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.360152] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.360171] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.368891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.368913] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.378670] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.378687] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.386981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.387001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.438 [2024-07-23 13:58:46.394519] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.438 [2024-07-23 13:58:46.394537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.439 [2024-07-23 13:58:46.403942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.439 [2024-07-23 13:58:46.403960] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.439 [2024-07-23 13:58:46.412639] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.439 [2024-07-23 13:58:46.412658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.439 [2024-07-23 13:58:46.421430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.439 [2024-07-23 13:58:46.421448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.439 [2024-07-23 13:58:46.430088] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.439 [2024-07-23 13:58:46.430108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.439 [2024-07-23 13:58:46.438758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.439 [2024-07-23 13:58:46.438777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.439 [2024-07-23 13:58:46.447386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.439 [2024-07-23 13:58:46.447404] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.458033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.458058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.469197] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.469215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.477868] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.477887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.484902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.484921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.494857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.494876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.503943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.503963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.512805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.512825] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.521233] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.521252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.530506] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.530524] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.544364] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.544387] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.552031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.552056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.560926] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.560944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.569738] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.569757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.578533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.578552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.592695] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.592714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.600688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.600707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.607612] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.607631] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.617696] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.617716] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.626491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.626509] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.635470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.635488] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.644766] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.644785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.653079] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.653107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.662074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.662094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.671050] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.671068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.684517] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.684536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.691405] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.691424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.701439] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.701457] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.698 [2024-07-23 13:58:46.710021] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.698 [2024-07-23 13:58:46.710040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.956 [2024-07-23 13:58:46.718365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.956 [2024-07-23 13:58:46.718388] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.956 [2024-07-23 13:58:46.726823] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.956 [2024-07-23 13:58:46.726841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.956 [2024-07-23 13:58:46.734223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.956 [2024-07-23 13:58:46.734241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.956 [2024-07-23 13:58:46.744068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.956 [2024-07-23 13:58:46.744088] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.956 [2024-07-23 13:58:46.751952] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.956 [2024-07-23 13:58:46.751971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.956 [2024-07-23 13:58:46.760549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.760567] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.774744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.774763] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.782746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.782765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.790565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.790583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.799298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.799316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.808015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.808034] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.816960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.816978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.826068] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.826086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.834831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.834849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.843120] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.843138] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.851582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.851600] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.860515] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.860535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.868812] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.868831] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.877675] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.877694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.886315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.886337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.894833] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.894851] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.904268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.904286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.913727] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.913746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.922387] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.922406] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.930869] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.930888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:55.957 [2024-07-23 13:58:46.939799] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:55.957 [2024-07-23 13:58:46.939817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same two-message *ERROR* pair (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: Requested NSID 1 already in use / nvmf_rpc.c:1513:nvmf_rpc_ns_paused: Unable to add namespace) repeated for every subsequent add-namespace attempt, elapsed 00:16:55.957 through 00:16:58.807, wall clock [2024-07-23 13:58:46.953633] through [2024-07-23 13:58:49.745614] ...]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.807 [2024-07-23 13:58:49.754465] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.807 [2024-07-23 13:58:49.754483] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.807 [2024-07-23 13:58:49.762776] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.807 [2024-07-23 13:58:49.762794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.807 [2024-07-23 13:58:49.770936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.807 [2024-07-23 13:58:49.770954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.807 [2024-07-23 13:58:49.779830] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.807 [2024-07-23 13:58:49.779848] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.807 [2024-07-23 13:58:49.788680] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.807 [2024-07-23 13:58:49.788697] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.807 [2024-07-23 13:58:49.797446] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.807 [2024-07-23 13:58:49.797463] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.807 [2024-07-23 13:58:49.806271] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.807 [2024-07-23 13:58:49.806289] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.807 [2024-07-23 13:58:49.814771] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.807 [2024-07-23 13:58:49.814788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.824673] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.824692] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.834991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.835009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.850314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.850332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.858616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.858634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.867083] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.867101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.875799] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.875817] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.886058] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.886077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.896529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.896547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.905580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.905597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.914803] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.914820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.923324] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.923341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.932398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.932416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.941296] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.941313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.950277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.950294] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.958831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.958849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.967602] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.967620] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.976205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.976223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:49.994131] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:49.994150] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:50.004084] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:50.004102] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:50.014901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:50.014921] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:50.023316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:50.023335] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:50.038948] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:50.038969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:50.053994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:50.054014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:50.062225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:50.062244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:50.071223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:50.071242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.065 [2024-07-23 13:58:50.079741] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.065 [2024-07-23 13:58:50.079767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.088843] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.088861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.103809] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.103828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.114946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.114965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.122403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.122421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.131928] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.131947] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.140901] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.140919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.154712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.154731] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.162024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.162049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.168986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.169004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.179685] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.179704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.188292] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.188310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.197694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.197712] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.206557] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.206575] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.214735] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.214754] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.223749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.223766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.230286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.230303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.245636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.245655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.253822] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.253841] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.264568] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.264585] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.275728] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.275746] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.284785] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.324 [2024-07-23 13:58:50.284803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.324 [2024-07-23 13:58:50.298789] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.325 [2024-07-23 13:58:50.298808] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.325 [2024-07-23 13:58:50.305949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.325 [2024-07-23 13:58:50.305968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.325 [2024-07-23 13:58:50.314015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.325 [2024-07-23 13:58:50.314033] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.325 [2024-07-23 13:58:50.323318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.325 [2024-07-23 13:58:50.323337] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.325 [2024-07-23 13:58:50.331888] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.325 [2024-07-23 13:58:50.331907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.341190] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.341212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.350076] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.350095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.361660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.361678] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.371133] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.371151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.381486] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.381504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.392783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.392801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.400349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.400368] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.410329] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.410348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.419342] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.419360] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.426710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.426727] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.441256] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.441275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.450834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.450852] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.460967] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.460985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.469681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.469700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.476251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.476269] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.491477] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.491495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.500070] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.500091] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.508624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.508644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.517815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.517834] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.528815] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.528833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.544445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.544464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.552542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.552560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.562429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.562448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.571682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.571700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.579457] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.579475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.584 [2024-07-23 13:58:50.594033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.584 [2024-07-23 13:58:50.594061] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.605236] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.605255] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.613810] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.613828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.622528] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.622547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.631507] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.631525] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.645681] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.645700] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.654479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.654498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.662967] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.662986] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.671585] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.671603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.680938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.680958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.695322] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.695343] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.703864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.703884] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.712215] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.712233] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.721137] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.721155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.730303] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.730321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.744474] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.744493] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.751321] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.751339] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.761999] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.762017] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.770705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.770724] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.779707] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.779730] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.793755] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.793774] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.802128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.802146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.811059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.811078] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.820095] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.820114] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.828935] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.828952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.843 [2024-07-23 13:58:50.837861] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.843 [2024-07-23 13:58:50.837879] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.844 [2024-07-23 13:58:50.846245] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.844 [2024-07-23 13:58:50.846263] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:59.844 [2024-07-23 13:58:50.855038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:59.844 [2024-07-23 13:58:50.855062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.864109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.864128] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.873723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.873743] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 00:17:00.103 Latency(us) 00:17:00.103 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.103 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 
50, depth: 128, IO size: 8192) 00:17:00.103 Nvme1n1 : 5.01 16766.64 130.99 0.00 0.00 7628.76 2065.81 31913.18 00:17:00.103 =================================================================================================================== 00:17:00.103 Total : 16766.64 130.99 0.00 0.00 7628.76 2065.81 31913.18 00:17:00.103 [2024-07-23 13:58:50.883859] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.883876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.891875] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.891890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.899893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.899904] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.907921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.907934] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.915946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.915961] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.927973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.927991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.935992] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.936004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.944013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.944025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.952035] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.952052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.960062] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.960074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.972096] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.972108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.980111] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.980122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.988136] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:00.103 [2024-07-23 13:58:50.988146] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:00.103 [2024-07-23 13:58:50.996174] 
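Each rejected attempt in the burst above is one add-namespace RPC of roughly this shape (a sketch inferred from the error text alone; the bdev argument never appears in the messages and is a placeholder here):

  # asks for NSID 1 while NSID 1 is still attached, so every call fails
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 <bdev> -n 1
  # -> subsystem.c: Requested NSID 1 already in use
  # -> nvmf_rpc.c:  Unable to add namespace

The nvmf_rpc_ns_paused frame in each message suggests the RPC path pauses the subsystem, attempts the add, and reports the failure from the paused-state callback.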
00:17:00.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3253584) - No such process
00:17:00.103 13:58:51 -- target/zcopy.sh@49 -- # wait 3253584
00:17:00.103 13:58:51 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:17:00.103 13:58:51 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:17:00.103 delay0
00:17:00.103 13:58:51 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:17:00.362 13:58:51 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[... per-command xtrace_disable / 'set +x' / '[[ 0 == 0 ]]' bookkeeping from common/autotest_common.sh elided ...]
00:17:00.362 EAL: No free 2048 kB hugepages reported on node 1
00:17:00.362 [2024-07-23 13:58:51.218281] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:17:06.917 Initializing NVMe Controllers
00:17:06.917 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:06.917 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:06.917 Initialization complete. Launching workers.
00:17:06.917 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 103
00:17:06.917 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 386, failed to submit 37
00:17:06.917 success 221, unsuccess 165, failed 0
00:17:06.917 13:58:57 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:17:06.917 13:58:57 -- target/zcopy.sh@60 -- # nvmftestfini
00:17:06.917 13:58:57 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:06.917 13:58:57 -- nvmf/common.sh@116 -- # sync
00:17:06.917 13:58:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:17:06.917 rmmod nvme_tcp
00:17:06.917 rmmod nvme_fabrics
00:17:06.917 rmmod nvme_keyring
00:17:06.917 13:58:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:06.917 13:58:57 -- nvmf/common.sh@478 -- # killprocess 3251546
00:17:06.917 killing process with pid 3251546
00:17:06.917 13:58:57 -- common/autotest_common.sh@945 -- # kill 3251546
00:17:06.917 13:58:57 -- common/autotest_common.sh@950 -- # wait 3251546
00:17:06.917 13:58:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:06.917 13:58:57 -- nvmf/common.sh@277 -- # remove_spdk_ns
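Condensed, the delay/abort phase that the xtrace above walks through is (commands copied from the log; rpc_cmd is the test harness wrapper around scripts/rpc.py):

  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

bdev_delay_create's -r/-t/-w/-n arguments set the average and p99 read/write latencies in microseconds, so delay0 holds every I/O for roughly a second; that keeps commands in flight long enough for the abort example to have work to cancel, which matches the 'abort submitted 386' result above.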
00:17:06.917 13:58:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:06.917 13:58:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:06.917 13:58:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:08.882 13:58:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:17:08.882 real	0m31.102s
00:17:08.882 user	0m42.644s
00:17:08.882 sys	0m10.319s
00:17:08.882 ************************************
00:17:08.882 END TEST nvmf_zcopy
00:17:08.882 ************************************
00:17:08.882 13:58:59 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:17:08.882 ************************************
00:17:08.882 START TEST nvmf_nmic
00:17:08.882 ************************************
00:17:08.882 13:58:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:17:08.882 * Looking for test storage...
00:17:08.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:17:08.882 13:58:59 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:17:08.882 13:58:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:17:08.882 13:58:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:17:08.882 13:58:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:17:08.882 13:58:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:17:08.882 13:58:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:17:08.882 13:58:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:17:08.882 13:58:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:17:08.882 13:58:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:17:08.882 13:58:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:17:08.882 13:58:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:17:09.140 13:58:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:17:09.140 13:58:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:17:09.140 13:58:59 -- paths/export.sh@2..@6 -- # export PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain bin directories repeated by each nested re-export ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:17:09.140 13:58:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:17:09.140 13:58:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:17:09.140 13:58:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:17:09.140 13:58:59 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:17:09.140 13:58:59 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:17:09.140 13:58:59 -- target/nmic.sh@14 -- # nvmftestinit
00:17:09.140 13:58:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:09.140 13:58:59 -- nvmf/common.sh@436 -- # prepare_net_devs
00:17:09.140 13:58:59 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:17:09.140 13:58:59 -- nvmf/common.sh@400 -- # remove_spdk_ns
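To re-run just this test outside Jenkins, the equivalent invocation is (a sketch assuming the same workspace layout and an e810 NIC pair; the environment variable is a harness convention, not shown in this part of the log):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo NET_TYPE=phy ./test/nvmf/target/nmic.sh --transport=tcp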
00:17:09.140 13:58:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:09.140 13:58:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:17:09.140 13:58:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:09.140 13:58:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:17:09.140 13:58:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:17:14.406 13:59:05 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
[... e810/x722/mlx PCI-ID array initialization elided; only the two e810 devices below match ...]
00:17:14.406 13:59:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:17:14.406 Found 0000:86:00.0 (0x8086 - 0x159b)
00:17:14.407 13:59:05 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:17:14.407 Found 0000:86:00.1 (0x8086 - 0x159b)
00:17:14.407 13:59:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:17:14.407 Found net devices under 0000:86:00.0: cvl_0_0
00:17:14.407 13:59:05 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:17:14.407 Found net devices under 0000:86:00.1: cvl_0_1
00:17:14.407 13:59:05 -- nvmf/common.sh@402 -- # is_hw=yes
00:17:14.407 13:59:05 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:17:14.407 13:59:05 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:14.407 13:59:05 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:14.407 13:59:05 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:17:14.407 13:59:05 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:17:14.407 13:59:05 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:17:14.407 13:59:05 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:17:14.407 13:59:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:17:14.407 13:59:05 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:17:14.407 13:59:05 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:17:14.407 13:59:05 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:17:14.407 13:59:05 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:17:14.407 13:59:05 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:17:14.407 13:59:05 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
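The net result of the commands above is a two-namespace topology (a sketch; the show commands are illustrative, not from the log):

  # default netns  : cvl_0_1 10.0.0.1/24 (initiator side)
  # cvl_0_0_ns_spdk: cvl_0_0 10.0.0.2/24 (target side, reached via 'ip netns exec')
  ip addr show cvl_0_1
  sudo ip netns exec cvl_0_0_ns_spdk ip addr show cvl_0_0

Moving the target port into its own network namespace makes the 10.0.0.1 to 10.0.0.2 TCP traffic actually traverse the link between the two e810 ports rather than being short-circuited through the host loopback path.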
00:17:14.666 13:59:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.666 13:59:05 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.666 13:59:05 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:14.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:14.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:17:14.666 00:17:14.666 --- 10.0.0.2 ping statistics --- 00:17:14.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.666 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:14.666 13:59:05 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.666 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.666 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:17:14.666 00:17:14.666 --- 10.0.0.1 ping statistics --- 00:17:14.666 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.666 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:17:14.666 13:59:05 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.666 13:59:05 -- nvmf/common.sh@410 -- # return 0 00:17:14.666 13:59:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:14.666 13:59:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.666 13:59:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:14.666 13:59:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:14.666 13:59:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.666 13:59:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:14.666 13:59:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:14.666 13:59:05 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:14.666 13:59:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:14.666 13:59:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:14.666 13:59:05 -- common/autotest_common.sh@10 -- # set +x 00:17:14.666 13:59:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:14.666 13:59:05 -- nvmf/common.sh@469 -- # nvmfpid=3258989 00:17:14.666 13:59:05 -- nvmf/common.sh@470 -- # waitforlisten 3258989 00:17:14.666 13:59:05 -- common/autotest_common.sh@819 -- # '[' -z 3258989 ']' 00:17:14.666 13:59:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.666 13:59:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:14.666 13:59:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.666 13:59:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:14.666 13:59:05 -- common/autotest_common.sh@10 -- # set +x 00:17:14.666 [2024-07-23 13:59:05.600636] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:17:14.666 [2024-07-23 13:59:05.600679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.666 EAL: No free 2048 kB hugepages reported on node 1 00:17:14.666 [2024-07-23 13:59:05.658055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:14.924 [2024-07-23 13:59:05.731720] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:14.924 [2024-07-23 13:59:05.731833] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.924 [2024-07-23 13:59:05.731842] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.925 [2024-07-23 13:59:05.731849] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:14.925 [2024-07-23 13:59:05.731898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.925 [2024-07-23 13:59:05.731993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.925 [2024-07-23 13:59:05.732077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.925 [2024-07-23 13:59:05.732079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:15.492 13:59:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:15.492 13:59:06 -- common/autotest_common.sh@852 -- # return 0 00:17:15.492 13:59:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:15.492 13:59:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:15.492 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.492 13:59:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:15.492 13:59:06 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:15.492 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.492 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.492 [2024-07-23 13:59:06.462380] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:15.492 13:59:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.492 13:59:06 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:15.492 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.492 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.492 Malloc0 00:17:15.492 13:59:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.492 13:59:06 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:15.492 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.492 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.492 13:59:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.492 13:59:06 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:15.492 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.492 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.751 13:59:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.751 13:59:06 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.751 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.751 13:59:06 -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.751 [2024-07-23 13:59:06.514058] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.751 13:59:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.751 13:59:06 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:15.751 test case1: single bdev can't be used in multiple subsystems 00:17:15.751 13:59:06 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:15.751 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.751 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.751 13:59:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.751 13:59:06 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:15.751 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.751 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.751 13:59:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.751 13:59:06 -- target/nmic.sh@28 -- # nmic_status=0 00:17:15.751 13:59:06 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:15.751 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.751 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.751 [2024-07-23 13:59:06.541980] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:15.751 [2024-07-23 13:59:06.541998] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:15.751 [2024-07-23 13:59:06.542005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:15.751 request: 00:17:15.751 { 00:17:15.751 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:15.751 "namespace": { 00:17:15.751 "bdev_name": "Malloc0" 00:17:15.751 }, 00:17:15.751 "method": "nvmf_subsystem_add_ns", 00:17:15.751 "req_id": 1 00:17:15.751 } 00:17:15.751 Got JSON-RPC error response 00:17:15.751 response: 00:17:15.751 { 00:17:15.752 "code": -32602, 00:17:15.752 "message": "Invalid parameters" 00:17:15.752 } 00:17:15.752 13:59:06 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:17:15.752 13:59:06 -- target/nmic.sh@29 -- # nmic_status=1 00:17:15.752 13:59:06 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:15.752 13:59:06 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:15.752 Adding namespace failed - expected result. 
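Test case1 above relies on the bdev layer's exclusive_write claim: once Malloc0 is attached to cnode1 as a namespace, a second subsystem cannot open it, and nvmf_subsystem_add_ns returns the -32602 JSON-RPC error shown. A condensed sketch of the same sequence issued through rpc.py directly (paths and names are the ones used in this run; rpc_cmd in the trace is effectively a wrapper around rpc.py):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC bdev_malloc_create 64 512 -b Malloc0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  if ! $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
      echo ' Adding namespace failed - expected result.'          # claim held by cnode1
  fi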
00:17:15.752 13:59:06 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:15.752 test case2: host connect to nvmf target in multiple paths 00:17:15.752 13:59:06 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:15.752 13:59:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:15.752 13:59:06 -- common/autotest_common.sh@10 -- # set +x 00:17:15.752 [2024-07-23 13:59:06.554127] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:15.752 13:59:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:15.752 13:59:06 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:17.127 13:59:07 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:18.061 13:59:08 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.061 13:59:08 -- common/autotest_common.sh@1177 -- # local i=0 00:17:18.061 13:59:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.061 13:59:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:17:18.061 13:59:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:19.962 13:59:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:19.962 13:59:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:19.962 13:59:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:17:19.962 13:59:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:17:19.963 13:59:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:19.963 13:59:10 -- common/autotest_common.sh@1187 -- # return 0 00:17:19.963 13:59:10 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:19.963 [global] 00:17:19.963 thread=1 00:17:19.963 invalidate=1 00:17:19.963 rw=write 00:17:19.963 time_based=1 00:17:19.963 runtime=1 00:17:19.963 ioengine=libaio 00:17:19.963 direct=1 00:17:19.963 bs=4096 00:17:19.963 iodepth=1 00:17:19.963 norandommap=0 00:17:19.963 numjobs=1 00:17:19.963 00:17:19.963 verify_dump=1 00:17:19.963 verify_backlog=512 00:17:19.963 verify_state_save=0 00:17:19.963 do_verify=1 00:17:19.963 verify=crc32c-intel 00:17:19.963 [job0] 00:17:19.963 filename=/dev/nvme0n1 00:17:19.963 Could not set queue depth (nvme0n1) 00:17:20.222 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:20.222 fio-3.35 00:17:20.222 Starting 1 thread 00:17:21.602 00:17:21.602 job0: (groupid=0, jobs=1): err= 0: pid=3260081: Tue Jul 23 13:59:12 2024 00:17:21.602 read: IOPS=747, BW=2989KiB/s (3061kB/s)(3052KiB/1021msec) 00:17:21.602 slat (nsec): min=7066, max=53669, avg=8873.39, stdev=2637.56 00:17:21.602 clat (usec): min=341, max=42065, avg=886.84, stdev=3955.86 00:17:21.602 lat (usec): min=350, max=42088, avg=895.71, stdev=3956.97 00:17:21.602 clat percentiles (usec): 00:17:21.602 | 1.00th=[ 363], 5.00th=[ 388], 10.00th=[ 437], 20.00th=[ 449], 00:17:21.602 | 30.00th=[ 469], 40.00th=[ 515], 50.00th=[ 519], 60.00th=[ 523], 00:17:21.602 | 70.00th=[ 529], 
80.00th=[ 537], 90.00th=[ 553], 95.00th=[ 603], 00:17:21.602 | 99.00th=[ 3687], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:21.602 | 99.99th=[42206] 00:17:21.602 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:17:21.602 slat (usec): min=11, max=27512, avg=40.09, stdev=859.35 00:17:21.602 clat (usec): min=192, max=667, avg=283.05, stdev=75.11 00:17:21.602 lat (usec): min=227, max=28133, avg=323.14, stdev=873.12 00:17:21.602 clat percentiles (usec): 00:17:21.602 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 231], 00:17:21.602 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 258], 60.00th=[ 265], 00:17:21.602 | 70.00th=[ 285], 80.00th=[ 318], 90.00th=[ 392], 95.00th=[ 486], 00:17:21.602 | 99.00th=[ 510], 99.50th=[ 523], 99.90th=[ 619], 99.95th=[ 668], 00:17:21.602 | 99.99th=[ 668] 00:17:21.602 bw ( KiB/s): min= 2960, max= 5232, per=100.00%, avg=4096.00, stdev=1606.55, samples=2 00:17:21.602 iops : min= 740, max= 1308, avg=1024.00, stdev=401.64, samples=2 00:17:21.602 lat (usec) : 250=25.46%, 500=44.21%, 750=29.83% 00:17:21.602 lat (msec) : 2=0.06%, 4=0.06%, 50=0.39% 00:17:21.602 cpu : usr=2.16%, sys=2.55%, ctx=1790, majf=0, minf=2 00:17:21.602 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:21.602 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.602 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:21.602 issued rwts: total=763,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:21.602 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:21.602 00:17:21.602 Run status group 0 (all jobs): 00:17:21.602 READ: bw=2989KiB/s (3061kB/s), 2989KiB/s-2989KiB/s (3061kB/s-3061kB/s), io=3052KiB (3125kB), run=1021-1021msec 00:17:21.602 WRITE: bw=4012KiB/s (4108kB/s), 4012KiB/s-4012KiB/s (4108kB/s-4108kB/s), io=4096KiB (4194kB), run=1021-1021msec 00:17:21.602 00:17:21.602 Disk stats (read/write): 00:17:21.602 nvme0n1: ios=785/1024, merge=0/0, ticks=1518/279, in_queue=1797, util=98.80% 00:17:21.602 13:59:12 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:21.602 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:21.602 13:59:12 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:21.602 13:59:12 -- common/autotest_common.sh@1198 -- # local i=0 00:17:21.602 13:59:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:17:21.602 13:59:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.602 13:59:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:21.602 13:59:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:21.602 13:59:12 -- common/autotest_common.sh@1210 -- # return 0 00:17:21.602 13:59:12 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:21.602 13:59:12 -- target/nmic.sh@53 -- # nvmftestfini 00:17:21.602 13:59:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:21.602 13:59:12 -- nvmf/common.sh@116 -- # sync 00:17:21.602 13:59:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:21.602 13:59:12 -- nvmf/common.sh@119 -- # set +e 00:17:21.602 13:59:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:21.602 13:59:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:21.602 rmmod nvme_tcp 00:17:21.602 rmmod nvme_fabrics 00:17:21.602 rmmod nvme_keyring 00:17:21.602 13:59:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:21.602 13:59:12 -- nvmf/common.sh@123 -- # set -e 00:17:21.602 
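Test case2 attaches a second listener (port 4421) to the same subsystem and connects the host once per path, which is why the single NQN-wide 'nvme disconnect -n' above reports 2 controller(s) torn down. A host-side sketch of the multipath connect/verify/disconnect flow (host NQN and host ID are the values generated for this run):

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
  HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
  SUBNQN=nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n $SUBNQN -a 10.0.0.2 -s 4421
  sleep 2                                        # settle time, as waitforserial does
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # expect the shared namespace
  # ... run the fio write/verify job against /dev/nvme0n1 ...
  nvme disconnect -n $SUBNQN                     # drops both controllers at once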
13:59:12 -- nvmf/common.sh@124 -- # return 0 00:17:21.602 13:59:12 -- nvmf/common.sh@477 -- # '[' -n 3258989 ']' 00:17:21.602 13:59:12 -- nvmf/common.sh@478 -- # killprocess 3258989 00:17:21.602 13:59:12 -- common/autotest_common.sh@926 -- # '[' -z 3258989 ']' 00:17:21.602 13:59:12 -- common/autotest_common.sh@930 -- # kill -0 3258989 00:17:21.602 13:59:12 -- common/autotest_common.sh@931 -- # uname 00:17:21.862 13:59:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.862 13:59:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3258989 00:17:21.862 13:59:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:21.862 13:59:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:21.862 13:59:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3258989' 00:17:21.862 killing process with pid 3258989 00:17:21.862 13:59:12 -- common/autotest_common.sh@945 -- # kill 3258989 00:17:21.862 13:59:12 -- common/autotest_common.sh@950 -- # wait 3258989 00:17:22.121 13:59:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:22.121 13:59:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:22.121 13:59:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:22.121 13:59:12 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:22.121 13:59:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:22.121 13:59:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.121 13:59:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:22.121 13:59:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.027 13:59:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:24.027 00:17:24.027 real 0m15.153s 00:17:24.027 user 0m35.168s 00:17:24.027 sys 0m4.967s 00:17:24.027 13:59:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.027 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:17:24.027 ************************************ 00:17:24.027 END TEST nvmf_nmic 00:17:24.027 ************************************ 00:17:24.027 13:59:14 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:24.027 13:59:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:24.027 13:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:24.027 13:59:14 -- common/autotest_common.sh@10 -- # set +x 00:17:24.027 ************************************ 00:17:24.027 START TEST nvmf_fio_target 00:17:24.027 ************************************ 00:17:24.027 13:59:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:24.287 * Looking for test storage... 
00:17:24.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.287 13:59:15 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.287 13:59:15 -- nvmf/common.sh@7 -- # uname -s 00:17:24.287 13:59:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.287 13:59:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.287 13:59:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.287 13:59:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.287 13:59:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:24.287 13:59:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:24.287 13:59:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.287 13:59:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:24.287 13:59:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.287 13:59:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:24.287 13:59:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.287 13:59:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.287 13:59:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.287 13:59:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:24.287 13:59:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:24.287 13:59:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.287 13:59:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.287 13:59:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.287 13:59:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.287 13:59:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.287 13:59:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.287 13:59:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.287 13:59:15 -- paths/export.sh@5 -- # export PATH 00:17:24.287 13:59:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.287 13:59:15 -- nvmf/common.sh@46 -- # : 0 00:17:24.287 13:59:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:24.287 13:59:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:24.287 13:59:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:24.287 13:59:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.287 13:59:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.287 13:59:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:24.287 13:59:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:24.287 13:59:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:24.287 13:59:15 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:24.287 13:59:15 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:24.287 13:59:15 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.287 13:59:15 -- target/fio.sh@16 -- # nvmftestinit 00:17:24.287 13:59:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:24.287 13:59:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.287 13:59:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:24.287 13:59:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:24.287 13:59:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:24.287 13:59:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.287 13:59:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.287 13:59:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:24.287 13:59:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:24.287 13:59:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:24.287 13:59:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:24.287 13:59:15 -- common/autotest_common.sh@10 -- # set +x 00:17:29.559 13:59:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:29.559 13:59:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:29.559 13:59:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:29.559 13:59:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:29.559 13:59:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:29.559 13:59:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:29.559 13:59:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:29.559 13:59:20 -- nvmf/common.sh@294 -- # net_devs=() 
00:17:29.559 13:59:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:29.559 13:59:20 -- nvmf/common.sh@295 -- # e810=() 00:17:29.559 13:59:20 -- nvmf/common.sh@295 -- # local -ga e810 00:17:29.559 13:59:20 -- nvmf/common.sh@296 -- # x722=() 00:17:29.559 13:59:20 -- nvmf/common.sh@296 -- # local -ga x722 00:17:29.559 13:59:20 -- nvmf/common.sh@297 -- # mlx=() 00:17:29.559 13:59:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:29.559 13:59:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.559 13:59:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:29.559 13:59:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:29.559 13:59:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:29.559 13:59:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:29.559 13:59:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:29.559 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:29.559 13:59:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:29.559 13:59:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:29.559 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:29.559 13:59:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:29.559 13:59:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:29.559 13:59:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.559 13:59:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:29.559 13:59:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:17:29.559 13:59:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:29.559 Found net devices under 0000:86:00.0: cvl_0_0 00:17:29.559 13:59:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.559 13:59:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:29.559 13:59:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.559 13:59:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:29.559 13:59:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.559 13:59:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:29.559 Found net devices under 0000:86:00.1: cvl_0_1 00:17:29.559 13:59:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.559 13:59:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:29.559 13:59:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:29.559 13:59:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:29.559 13:59:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.559 13:59:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.559 13:59:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.559 13:59:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:29.559 13:59:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.559 13:59:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.559 13:59:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:29.559 13:59:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.559 13:59:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.559 13:59:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:29.559 13:59:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:29.559 13:59:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.559 13:59:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.559 13:59:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.559 13:59:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.559 13:59:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:29.559 13:59:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.559 13:59:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.559 13:59:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.559 13:59:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:29.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:29.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:17:29.559 00:17:29.559 --- 10.0.0.2 ping statistics --- 00:17:29.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.559 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:17:29.559 13:59:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:29.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:17:29.559 00:17:29.559 --- 10.0.0.1 ping statistics --- 00:17:29.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.559 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:17:29.559 13:59:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.559 13:59:20 -- nvmf/common.sh@410 -- # return 0 00:17:29.559 13:59:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:29.559 13:59:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.559 13:59:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:29.559 13:59:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.559 13:59:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:29.559 13:59:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:29.559 13:59:20 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:29.559 13:59:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:29.559 13:59:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:29.559 13:59:20 -- common/autotest_common.sh@10 -- # set +x 00:17:29.559 13:59:20 -- nvmf/common.sh@469 -- # nvmfpid=3263809 00:17:29.559 13:59:20 -- nvmf/common.sh@470 -- # waitforlisten 3263809 00:17:29.559 13:59:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:29.560 13:59:20 -- common/autotest_common.sh@819 -- # '[' -z 3263809 ']' 00:17:29.560 13:59:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.560 13:59:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.560 13:59:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.560 13:59:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.560 13:59:20 -- common/autotest_common.sh@10 -- # set +x 00:17:29.818 [2024-07-23 13:59:20.615397] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:29.818 [2024-07-23 13:59:20.615441] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.818 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.818 [2024-07-23 13:59:20.674480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:29.818 [2024-07-23 13:59:20.751323] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:29.818 [2024-07-23 13:59:20.751438] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.818 [2024-07-23 13:59:20.751446] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.818 [2024-07-23 13:59:20.751452] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
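As in the nmic run, nvmfappstart launches the target inside the namespace with all tracepoint groups enabled (-e 0xFFFF) and a four-core mask (-m 0xF), then waitforlisten blocks until the RPC socket answers. A sketch of the equivalent manual start (polling rpc_get_methods through rpc.py is one way to wait; the harness's waitforlisten does effectively the same against /var/tmp/spdk.sock):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # block until the app accepts RPCs on /var/tmp/spdk.sock
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done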
00:17:29.818 [2024-07-23 13:59:20.751493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.818 [2024-07-23 13:59:20.751595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.818 [2024-07-23 13:59:20.751683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.818 [2024-07-23 13:59:20.751684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.760 13:59:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.760 13:59:21 -- common/autotest_common.sh@852 -- # return 0 00:17:30.760 13:59:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:30.760 13:59:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:30.760 13:59:21 -- common/autotest_common.sh@10 -- # set +x 00:17:30.760 13:59:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.760 13:59:21 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:30.760 [2024-07-23 13:59:21.598758] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.760 13:59:21 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.052 13:59:21 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:31.052 13:59:21 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.052 13:59:22 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:31.052 13:59:22 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.310 13:59:22 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:17:31.310 13:59:22 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.570 13:59:22 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:31.570 13:59:22 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:31.570 13:59:22 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:31.828 13:59:22 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:31.828 13:59:22 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:32.088 13:59:22 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:32.088 13:59:22 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:32.346 13:59:23 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:32.346 13:59:23 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:32.346 13:59:23 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:32.606 13:59:23 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:32.606 13:59:23 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:32.864 13:59:23 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:32.864 13:59:23 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:32.864 13:59:23 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.122 [2024-07-23 13:59:24.005333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.122 13:59:24 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:33.381 13:59:24 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:33.381 13:59:24 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:34.759 13:59:25 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:34.759 13:59:25 -- common/autotest_common.sh@1177 -- # local i=0 00:17:34.759 13:59:25 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.759 13:59:25 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:17:34.759 13:59:25 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:17:34.759 13:59:25 -- common/autotest_common.sh@1184 -- # sleep 2 00:17:36.666 13:59:27 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:17:36.666 13:59:27 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:17:36.666 13:59:27 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.666 13:59:27 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:17:36.666 13:59:27 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.666 13:59:27 -- common/autotest_common.sh@1187 -- # return 0 00:17:36.666 13:59:27 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:36.666 [global] 00:17:36.666 thread=1 00:17:36.666 invalidate=1 00:17:36.666 rw=write 00:17:36.666 time_based=1 00:17:36.666 runtime=1 00:17:36.666 ioengine=libaio 00:17:36.666 direct=1 00:17:36.666 bs=4096 00:17:36.666 iodepth=1 00:17:36.666 norandommap=0 00:17:36.666 numjobs=1 00:17:36.666 00:17:36.666 verify_dump=1 00:17:36.666 verify_backlog=512 00:17:36.666 verify_state_save=0 00:17:36.666 do_verify=1 00:17:36.666 verify=crc32c-intel 00:17:36.666 [job0] 00:17:36.666 filename=/dev/nvme0n1 00:17:36.666 [job1] 00:17:36.666 filename=/dev/nvme0n2 00:17:36.666 [job2] 00:17:36.666 filename=/dev/nvme0n3 00:17:36.666 [job3] 00:17:36.666 filename=/dev/nvme0n4 00:17:36.666 Could not set queue depth (nvme0n1) 00:17:36.666 Could not set queue depth (nvme0n2) 00:17:36.666 Could not set queue depth (nvme0n3) 00:17:36.666 Could not set queue depth (nvme0n4) 00:17:36.927 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.927 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.927 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.927 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.927 fio-3.35 
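For the fio_target run the trace above builds a deeper bdev stack: seven 64 MiB malloc bdevs (512-byte blocks), a RAID0 over Malloc2/Malloc3 and a concat over Malloc4-Malloc6, all exported through one subsystem, so the nvme connect sees four namespaces (hence waitforserial expecting 4 devices). Condensed into a sketch from this trace (bdev_malloc_create without -b auto-names the bdevs Malloc0..Malloc6):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 7); do $RPC bdev_malloc_create 64 512; done      # Malloc0..Malloc6
  $RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  $RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0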
00:17:36.927 Starting 4 threads 00:17:38.307 00:17:38.307 job0: (groupid=0, jobs=1): err= 0: pid=3265223: Tue Jul 23 13:59:29 2024 00:17:38.307 read: IOPS=1003, BW=4016KiB/s (4112kB/s)(4104KiB/1022msec) 00:17:38.307 slat (nsec): min=6871, max=21912, avg=8098.06, stdev=1006.45 00:17:38.307 clat (usec): min=382, max=42131, avg=598.59, stdev=1807.38 00:17:38.307 lat (usec): min=389, max=42152, avg=606.69, stdev=1807.71 00:17:38.307 clat percentiles (usec): 00:17:38.307 | 1.00th=[ 396], 5.00th=[ 416], 10.00th=[ 449], 20.00th=[ 498], 00:17:38.307 | 30.00th=[ 515], 40.00th=[ 523], 50.00th=[ 529], 60.00th=[ 537], 00:17:38.307 | 70.00th=[ 537], 80.00th=[ 545], 90.00th=[ 553], 95.00th=[ 562], 00:17:38.307 | 99.00th=[ 594], 99.50th=[ 766], 99.90th=[40633], 99.95th=[42206], 00:17:38.307 | 99.99th=[42206] 00:17:38.307 write: IOPS=1502, BW=6012KiB/s (6156kB/s)(6144KiB/1022msec); 0 zone resets 00:17:38.307 slat (nsec): min=10285, max=66002, avg=11679.45, stdev=2171.23 00:17:38.307 clat (usec): min=195, max=696, avg=242.90, stdev=61.38 00:17:38.307 lat (usec): min=206, max=738, avg=254.58, stdev=61.97 00:17:38.307 clat percentiles (usec): 00:17:38.307 | 1.00th=[ 200], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 212], 00:17:38.307 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 231], 00:17:38.307 | 70.00th=[ 239], 80.00th=[ 255], 90.00th=[ 293], 95.00th=[ 326], 00:17:38.307 | 99.00th=[ 523], 99.50th=[ 644], 99.90th=[ 693], 99.95th=[ 693], 00:17:38.307 | 99.99th=[ 693] 00:17:38.307 bw ( KiB/s): min= 5624, max= 6650, per=34.03%, avg=6137.00, stdev=725.49, samples=2 00:17:38.307 iops : min= 1406, max= 1662, avg=1534.00, stdev=181.02, samples=2 00:17:38.307 lat (usec) : 250=46.37%, 500=21.04%, 750=32.28%, 1000=0.20% 00:17:38.307 lat (msec) : 2=0.04%, 50=0.08% 00:17:38.307 cpu : usr=2.25%, sys=3.82%, ctx=2563, majf=0, minf=2 00:17:38.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.307 issued rwts: total=1026,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.307 job1: (groupid=0, jobs=1): err= 0: pid=3265224: Tue Jul 23 13:59:29 2024 00:17:38.307 read: IOPS=1278, BW=5115KiB/s (5238kB/s)(5120KiB/1001msec) 00:17:38.307 slat (nsec): min=6885, max=36245, avg=8017.60, stdev=1598.50 00:17:38.307 clat (usec): min=382, max=768, avg=466.85, stdev=23.69 00:17:38.307 lat (usec): min=389, max=776, avg=474.86, stdev=23.72 00:17:38.307 clat percentiles (usec): 00:17:38.307 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 437], 20.00th=[ 457], 00:17:38.307 | 30.00th=[ 461], 40.00th=[ 469], 50.00th=[ 469], 60.00th=[ 474], 00:17:38.307 | 70.00th=[ 478], 80.00th=[ 482], 90.00th=[ 490], 95.00th=[ 494], 00:17:38.307 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 519], 99.95th=[ 766], 00:17:38.307 | 99.99th=[ 766] 00:17:38.307 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:38.307 slat (nsec): min=10257, max=41989, avg=11587.43, stdev=1862.35 00:17:38.307 clat (usec): min=190, max=699, avg=238.37, stdev=56.90 00:17:38.307 lat (usec): min=202, max=713, avg=249.96, stdev=57.37 00:17:38.307 clat percentiles (usec): 00:17:38.307 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:17:38.307 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 227], 00:17:38.307 | 70.00th=[ 235], 80.00th=[ 249], 90.00th=[ 
281], 95.00th=[ 338], 00:17:38.307 | 99.00th=[ 510], 99.50th=[ 644], 99.90th=[ 676], 99.95th=[ 701], 00:17:38.307 | 99.99th=[ 701] 00:17:38.307 bw ( KiB/s): min= 7744, max= 7744, per=42.94%, avg=7744.00, stdev= 0.00, samples=1 00:17:38.307 iops : min= 1936, max= 1936, avg=1936.00, stdev= 0.00, samples=1 00:17:38.307 lat (usec) : 250=44.14%, 500=54.62%, 750=1.21%, 1000=0.04% 00:17:38.307 cpu : usr=2.50%, sys=4.30%, ctx=2816, majf=0, minf=1 00:17:38.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.307 issued rwts: total=1280,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.307 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.307 job2: (groupid=0, jobs=1): err= 0: pid=3265225: Tue Jul 23 13:59:29 2024 00:17:38.307 read: IOPS=76, BW=306KiB/s (313kB/s)(312KiB/1021msec) 00:17:38.307 slat (nsec): min=6570, max=24194, avg=10894.38, stdev=6090.96 00:17:38.307 clat (usec): min=598, max=42970, avg=11189.28, stdev=17911.06 00:17:38.307 lat (usec): min=605, max=42993, avg=11200.18, stdev=17915.87 00:17:38.307 clat percentiles (usec): 00:17:38.307 | 1.00th=[ 603], 5.00th=[ 619], 10.00th=[ 652], 20.00th=[ 660], 00:17:38.307 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 840], 60.00th=[ 865], 00:17:38.307 | 70.00th=[ 930], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:38.307 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:17:38.307 | 99.99th=[42730] 00:17:38.307 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:17:38.307 slat (nsec): min=9861, max=43666, avg=11232.33, stdev=2014.82 00:17:38.307 clat (usec): min=212, max=677, avg=273.95, stdev=72.96 00:17:38.307 lat (usec): min=223, max=720, avg=285.18, stdev=73.38 00:17:38.307 clat percentiles (usec): 00:17:38.307 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:17:38.307 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 245], 60.00th=[ 253], 00:17:38.307 | 70.00th=[ 277], 80.00th=[ 297], 90.00th=[ 379], 95.00th=[ 445], 00:17:38.307 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 676], 99.95th=[ 676], 00:17:38.307 | 99.99th=[ 676] 00:17:38.307 bw ( KiB/s): min= 4087, max= 4087, per=22.66%, avg=4087.00, stdev= 0.00, samples=1 00:17:38.307 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:17:38.307 lat (usec) : 250=50.00%, 500=34.07%, 750=7.80%, 1000=4.58% 00:17:38.307 lat (msec) : 2=0.17%, 50=3.39% 00:17:38.307 cpu : usr=0.39%, sys=0.49%, ctx=592, majf=0, minf=1 00:17:38.307 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.308 issued rwts: total=78,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.308 job3: (groupid=0, jobs=1): err= 0: pid=3265226: Tue Jul 23 13:59:29 2024 00:17:38.308 read: IOPS=816, BW=3265KiB/s (3343kB/s)(3268KiB/1001msec) 00:17:38.308 slat (nsec): min=4697, max=20461, avg=5961.88, stdev=887.36 00:17:38.308 clat (usec): min=389, max=2425, avg=618.58, stdev=115.80 00:17:38.308 lat (usec): min=395, max=2433, avg=624.54, stdev=115.90 00:17:38.308 clat percentiles (usec): 00:17:38.308 | 1.00th=[ 449], 5.00th=[ 474], 10.00th=[ 490], 20.00th=[ 519], 00:17:38.308 | 
30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 611], 60.00th=[ 635], 00:17:38.308 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 742], 95.00th=[ 783], 00:17:38.308 | 99.00th=[ 865], 99.50th=[ 938], 99.90th=[ 2442], 99.95th=[ 2442], 00:17:38.308 | 99.99th=[ 2442] 00:17:38.308 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:17:38.308 slat (usec): min=6, max=152, avg= 9.13, stdev= 7.49 00:17:38.308 clat (usec): min=241, max=1415, avg=465.28, stdev=119.78 00:17:38.308 lat (usec): min=247, max=1432, avg=474.41, stdev=121.42 00:17:38.308 clat percentiles (usec): 00:17:38.308 | 1.00th=[ 247], 5.00th=[ 277], 10.00th=[ 310], 20.00th=[ 392], 00:17:38.308 | 30.00th=[ 445], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 474], 00:17:38.308 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[ 586], 95.00th=[ 685], 00:17:38.308 | 99.00th=[ 873], 99.50th=[ 1045], 99.90th=[ 1188], 99.95th=[ 1418], 00:17:38.308 | 99.99th=[ 1418] 00:17:38.308 bw ( KiB/s): min= 4096, max= 4096, per=22.71%, avg=4096.00, stdev= 0.00, samples=1 00:17:38.308 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:38.308 lat (usec) : 250=0.71%, 500=52.15%, 750=42.75%, 1000=3.97% 00:17:38.308 lat (msec) : 2=0.38%, 4=0.05% 00:17:38.308 cpu : usr=0.80%, sys=2.20%, ctx=1841, majf=0, minf=1 00:17:38.308 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:38.308 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.308 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:38.308 issued rwts: total=817,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:38.308 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:38.308 00:17:38.308 Run status group 0 (all jobs): 00:17:38.308 READ: bw=12.2MiB/s (12.8MB/s), 306KiB/s-5115KiB/s (313kB/s-5238kB/s), io=12.5MiB (13.1MB), run=1001-1022msec 00:17:38.308 WRITE: bw=17.6MiB/s (18.5MB/s), 2006KiB/s-6138KiB/s (2054kB/s-6285kB/s), io=18.0MiB (18.9MB), run=1001-1022msec 00:17:38.308 00:17:38.308 Disk stats (read/write): 00:17:38.308 nvme0n1: ios=1074/1167, merge=0/0, ticks=591/274, in_queue=865, util=88.06% 00:17:38.308 nvme0n2: ios=1056/1423, merge=0/0, ticks=517/321, in_queue=838, util=88.40% 00:17:38.308 nvme0n3: ios=42/512, merge=0/0, ticks=1635/139, in_queue=1774, util=98.75% 00:17:38.308 nvme0n4: ios=602/1024, merge=0/0, ticks=678/466, in_queue=1144, util=93.07% 00:17:38.308 13:59:29 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:38.308 [global] 00:17:38.308 thread=1 00:17:38.308 invalidate=1 00:17:38.308 rw=randwrite 00:17:38.308 time_based=1 00:17:38.308 runtime=1 00:17:38.308 ioengine=libaio 00:17:38.308 direct=1 00:17:38.308 bs=4096 00:17:38.308 iodepth=1 00:17:38.308 norandommap=0 00:17:38.308 numjobs=1 00:17:38.308 00:17:38.308 verify_dump=1 00:17:38.308 verify_backlog=512 00:17:38.308 verify_state_save=0 00:17:38.308 do_verify=1 00:17:38.308 verify=crc32c-intel 00:17:38.308 [job0] 00:17:38.308 filename=/dev/nvme0n1 00:17:38.308 [job1] 00:17:38.308 filename=/dev/nvme0n2 00:17:38.308 [job2] 00:17:38.308 filename=/dev/nvme0n3 00:17:38.308 [job3] 00:17:38.308 filename=/dev/nvme0n4 00:17:38.308 Could not set queue depth (nvme0n1) 00:17:38.308 Could not set queue depth (nvme0n2) 00:17:38.308 Could not set queue depth (nvme0n3) 00:17:38.308 Could not set queue depth (nvme0n4) 00:17:38.567 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:17:38.568 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:38.568 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:38.568 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:38.568 fio-3.35 00:17:38.568 Starting 4 threads 00:17:39.949 00:17:39.949 job0: (groupid=0, jobs=1): err= 0: pid=3265607: Tue Jul 23 13:59:30 2024 00:17:39.949 read: IOPS=1156, BW=4627KiB/s (4738kB/s)(4632KiB/1001msec) 00:17:39.949 slat (nsec): min=3640, max=40758, avg=7054.91, stdev=2365.85 00:17:39.949 clat (usec): min=294, max=2250, avg=504.55, stdev=94.99 00:17:39.949 lat (usec): min=302, max=2255, avg=511.60, stdev=95.18 00:17:39.949 clat percentiles (usec): 00:17:39.949 | 1.00th=[ 318], 5.00th=[ 400], 10.00th=[ 433], 20.00th=[ 449], 00:17:39.949 | 30.00th=[ 461], 40.00th=[ 474], 50.00th=[ 486], 60.00th=[ 506], 00:17:39.949 | 70.00th=[ 537], 80.00th=[ 570], 90.00th=[ 611], 95.00th=[ 619], 00:17:39.949 | 99.00th=[ 758], 99.50th=[ 840], 99.90th=[ 1123], 99.95th=[ 2245], 00:17:39.949 | 99.99th=[ 2245] 00:17:39.949 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:17:39.949 slat (nsec): min=4497, max=47737, avg=10073.11, stdev=3481.60 00:17:39.949 clat (usec): min=176, max=815, avg=250.90, stdev=77.53 00:17:39.949 lat (usec): min=182, max=848, avg=260.97, stdev=77.96 00:17:39.949 clat percentiles (usec): 00:17:39.949 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:17:39.949 | 30.00th=[ 206], 40.00th=[ 210], 50.00th=[ 217], 60.00th=[ 227], 00:17:39.949 | 70.00th=[ 253], 80.00th=[ 289], 90.00th=[ 355], 95.00th=[ 437], 00:17:39.949 | 99.00th=[ 506], 99.50th=[ 510], 99.90th=[ 627], 99.95th=[ 816], 00:17:39.949 | 99.99th=[ 816] 00:17:39.949 bw ( KiB/s): min= 7032, max= 7032, per=58.26%, avg=7032.00, stdev= 0.00, samples=1 00:17:39.949 iops : min= 1758, max= 1758, avg=1758.00, stdev= 0.00, samples=1 00:17:39.949 lat (usec) : 250=38.83%, 500=41.65%, 750=19.01%, 1000=0.41% 00:17:39.949 lat (msec) : 2=0.07%, 4=0.04% 00:17:39.949 cpu : usr=2.60%, sys=2.60%, ctx=2697, majf=0, minf=1 00:17:39.949 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.949 issued rwts: total=1158,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.949 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.949 job1: (groupid=0, jobs=1): err= 0: pid=3265608: Tue Jul 23 13:59:30 2024 00:17:39.949 read: IOPS=20, BW=83.3KiB/s (85.3kB/s)(84.0KiB/1008msec) 00:17:39.949 slat (nsec): min=10401, max=23544, avg=19534.10, stdev=4287.47 00:17:39.949 clat (usec): min=1543, max=42467, avg=39654.93, stdev=8746.19 00:17:39.949 lat (usec): min=1559, max=42489, avg=39674.47, stdev=8747.05 00:17:39.949 clat percentiles (usec): 00:17:39.949 | 1.00th=[ 1549], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:39.949 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:17:39.949 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:39.949 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:39.949 | 99.99th=[42206] 00:17:39.949 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:17:39.949 slat (nsec): min=3547, max=36828, avg=10209.70, 
stdev=3944.86 00:17:39.949 clat (usec): min=183, max=1754, avg=326.99, stdev=133.72 00:17:39.949 lat (usec): min=187, max=1758, avg=337.20, stdev=133.47 00:17:39.949 clat percentiles (usec): 00:17:39.949 | 1.00th=[ 190], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 235], 00:17:39.949 | 30.00th=[ 249], 40.00th=[ 262], 50.00th=[ 285], 60.00th=[ 322], 00:17:39.949 | 70.00th=[ 367], 80.00th=[ 408], 90.00th=[ 437], 95.00th=[ 619], 00:17:39.949 | 99.00th=[ 660], 99.50th=[ 971], 99.90th=[ 1762], 99.95th=[ 1762], 00:17:39.949 | 99.99th=[ 1762] 00:17:39.949 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:39.949 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:39.949 lat (usec) : 250=30.21%, 500=59.85%, 750=5.44%, 1000=0.19% 00:17:39.949 lat (msec) : 2=0.56%, 50=3.75% 00:17:39.949 cpu : usr=0.30%, sys=0.89%, ctx=534, majf=0, minf=1 00:17:39.949 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.950 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.950 job2: (groupid=0, jobs=1): err= 0: pid=3265609: Tue Jul 23 13:59:30 2024 00:17:39.950 read: IOPS=20, BW=83.8KiB/s (85.8kB/s)(84.0KiB/1002msec) 00:17:39.950 slat (nsec): min=10515, max=26515, avg=20730.95, stdev=3503.61 00:17:39.950 clat (usec): min=674, max=42119, avg=39841.89, stdev=8980.43 00:17:39.950 lat (usec): min=695, max=42141, avg=39862.63, stdev=8980.28 00:17:39.950 clat percentiles (usec): 00:17:39.950 | 1.00th=[ 676], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:39.950 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:17:39.950 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:39.950 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:39.950 | 99.99th=[42206] 00:17:39.950 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:17:39.950 slat (nsec): min=10351, max=45713, avg=11975.59, stdev=2856.65 00:17:39.950 clat (usec): min=214, max=816, avg=306.80, stdev=97.18 00:17:39.950 lat (usec): min=225, max=856, avg=318.78, stdev=98.20 00:17:39.950 clat percentiles (usec): 00:17:39.950 | 1.00th=[ 221], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 237], 00:17:39.950 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 277], 00:17:39.950 | 70.00th=[ 314], 80.00th=[ 404], 90.00th=[ 494], 95.00th=[ 515], 00:17:39.950 | 99.00th=[ 545], 99.50th=[ 603], 99.90th=[ 816], 99.95th=[ 816], 00:17:39.950 | 99.99th=[ 816] 00:17:39.950 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:39.950 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:39.950 lat (usec) : 250=36.96%, 500=51.41%, 750=7.69%, 1000=0.19% 00:17:39.950 lat (msec) : 50=3.75% 00:17:39.950 cpu : usr=0.10%, sys=1.30%, ctx=533, majf=0, minf=2 00:17:39.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.950 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.950 job3: (groupid=0, jobs=1): err= 0: 
pid=3265610: Tue Jul 23 13:59:30 2024 00:17:39.950 read: IOPS=19, BW=78.6KiB/s (80.5kB/s)(80.0KiB/1018msec) 00:17:39.950 slat (nsec): min=10746, max=24397, avg=21986.80, stdev=2900.87 00:17:39.950 clat (usec): min=40875, max=42228, avg=41487.69, stdev=485.53 00:17:39.950 lat (usec): min=40897, max=42252, avg=41509.68, stdev=486.24 00:17:39.950 clat percentiles (usec): 00:17:39.950 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:17:39.950 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:17:39.950 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:39.950 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:39.950 | 99.99th=[42206] 00:17:39.950 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:17:39.950 slat (nsec): min=10983, max=39438, avg=12526.42, stdev=1974.25 00:17:39.950 clat (usec): min=269, max=681, avg=350.87, stdev=74.28 00:17:39.950 lat (usec): min=281, max=712, avg=363.40, stdev=75.00 00:17:39.950 clat percentiles (usec): 00:17:39.950 | 1.00th=[ 277], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 00:17:39.950 | 30.00th=[ 293], 40.00th=[ 310], 50.00th=[ 326], 60.00th=[ 343], 00:17:39.950 | 70.00th=[ 379], 80.00th=[ 408], 90.00th=[ 494], 95.00th=[ 510], 00:17:39.950 | 99.00th=[ 523], 99.50th=[ 529], 99.90th=[ 685], 99.95th=[ 685], 00:17:39.950 | 99.99th=[ 685] 00:17:39.950 bw ( KiB/s): min= 4096, max= 4096, per=33.93%, avg=4096.00, stdev= 0.00, samples=1 00:17:39.950 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:39.950 lat (usec) : 500=88.16%, 750=8.08% 00:17:39.950 lat (msec) : 50=3.76% 00:17:39.950 cpu : usr=0.20%, sys=1.18%, ctx=533, majf=0, minf=1 00:17:39.950 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:39.950 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.950 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:39.950 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:39.950 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:39.950 00:17:39.950 Run status group 0 (all jobs): 00:17:39.950 READ: bw=4794KiB/s (4909kB/s), 78.6KiB/s-4627KiB/s (80.5kB/s-4738kB/s), io=4880KiB (4997kB), run=1001-1018msec 00:17:39.950 WRITE: bw=11.8MiB/s (12.4MB/s), 2012KiB/s-6138KiB/s (2060kB/s-6285kB/s), io=12.0MiB (12.6MB), run=1001-1018msec 00:17:39.950 00:17:39.950 Disk stats (read/write): 00:17:39.950 nvme0n1: ios=1060/1070, merge=0/0, ticks=1124/258, in_queue=1382, util=96.99% 00:17:39.950 nvme0n2: ios=45/512, merge=0/0, ticks=866/164, in_queue=1030, util=99.49% 00:17:39.950 nvme0n3: ios=16/512, merge=0/0, ticks=627/154, in_queue=781, util=87.38% 00:17:39.950 nvme0n4: ios=71/512, merge=0/0, ticks=784/176, in_queue=960, util=97.56% 00:17:39.950 13:59:30 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:39.950 [global] 00:17:39.950 thread=1 00:17:39.950 invalidate=1 00:17:39.950 rw=write 00:17:39.950 time_based=1 00:17:39.950 runtime=1 00:17:39.950 ioengine=libaio 00:17:39.950 direct=1 00:17:39.950 bs=4096 00:17:39.950 iodepth=128 00:17:39.950 norandommap=0 00:17:39.950 numjobs=1 00:17:39.950 00:17:39.950 verify_dump=1 00:17:39.950 verify_backlog=512 00:17:39.950 verify_state_save=0 00:17:39.950 do_verify=1 00:17:39.950 verify=crc32c-intel 00:17:39.950 [job0] 00:17:39.950 filename=/dev/nvme0n1 00:17:39.950 [job1] 00:17:39.950 
filename=/dev/nvme0n2 00:17:39.950 [job2] 00:17:39.950 filename=/dev/nvme0n3 00:17:39.950 [job3] 00:17:39.950 filename=/dev/nvme0n4 00:17:39.950 Could not set queue depth (nvme0n1) 00:17:39.950 Could not set queue depth (nvme0n2) 00:17:39.950 Could not set queue depth (nvme0n3) 00:17:39.950 Could not set queue depth (nvme0n4) 00:17:40.208 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:40.208 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:40.208 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:40.208 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:40.208 fio-3.35 00:17:40.208 Starting 4 threads 00:17:41.623 00:17:41.623 job0: (groupid=0, jobs=1): err= 0: pid=3265976: Tue Jul 23 13:59:32 2024 00:17:41.623 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 00:17:41.623 slat (nsec): min=1341, max=11471k, avg=84674.68, stdev=529956.60 00:17:41.623 clat (usec): min=3178, max=24142, avg=11163.36, stdev=3499.35 00:17:41.623 lat (usec): min=3186, max=24152, avg=11248.04, stdev=3519.65 00:17:41.623 clat percentiles (usec): 00:17:41.624 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 8225], 00:17:41.624 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10421], 60.00th=[11469], 00:17:41.624 | 70.00th=[12649], 80.00th=[13960], 90.00th=[15795], 95.00th=[17171], 00:17:41.624 | 99.00th=[21890], 99.50th=[23987], 99.90th=[23987], 99.95th=[24249], 00:17:41.624 | 99.99th=[24249] 00:17:41.624 write: IOPS=5395, BW=21.1MiB/s (22.1MB/s)(21.3MiB/1010msec); 0 zone resets 00:17:41.624 slat (usec): min=2, max=7410, avg=99.39, stdev=417.15 00:17:41.624 clat (usec): min=2034, max=23222, avg=12925.12, stdev=3709.68 00:17:41.624 lat (usec): min=3095, max=23226, avg=13024.51, stdev=3722.71 00:17:41.624 clat percentiles (usec): 00:17:41.624 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 7701], 20.00th=[ 9503], 00:17:41.624 | 30.00th=[10945], 40.00th=[12387], 50.00th=[13566], 60.00th=[14222], 00:17:41.624 | 70.00th=[15008], 80.00th=[16057], 90.00th=[17433], 95.00th=[18482], 00:17:41.624 | 99.00th=[20317], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:17:41.624 | 99.99th=[23200] 00:17:41.624 bw ( KiB/s): min=21024, max=21552, per=31.48%, avg=21288.00, stdev=373.35, samples=2 00:17:41.624 iops : min= 5256, max= 5388, avg=5322.00, stdev=93.34, samples=2 00:17:41.624 lat (msec) : 4=0.24%, 10=33.82%, 20=64.23%, 50=1.72% 00:17:41.624 cpu : usr=3.47%, sys=4.16%, ctx=734, majf=0, minf=1 00:17:41.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:41.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:41.624 issued rwts: total=5120,5449,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:41.624 job1: (groupid=0, jobs=1): err= 0: pid=3265977: Tue Jul 23 13:59:32 2024 00:17:41.624 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:17:41.624 slat (nsec): min=1149, max=27206k, avg=90080.59, stdev=661556.99 00:17:41.624 clat (usec): min=1491, max=46541, avg=12043.98, stdev=5330.57 00:17:41.624 lat (usec): min=1569, max=46547, avg=12134.06, stdev=5362.68 00:17:41.624 clat percentiles (usec): 00:17:41.624 | 1.00th=[ 4424], 5.00th=[ 7570], 10.00th=[ 
8094], 20.00th=[ 8848], 00:17:41.624 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11207], 60.00th=[11863], 00:17:41.624 | 70.00th=[12649], 80.00th=[13829], 90.00th=[16450], 95.00th=[18744], 00:17:41.624 | 99.00th=[41157], 99.50th=[45351], 99.90th=[46400], 99.95th=[46400], 00:17:41.624 | 99.99th=[46400] 00:17:41.624 write: IOPS=3638, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1007msec); 0 zone resets 00:17:41.624 slat (usec): min=2, max=47165, avg=176.12, stdev=1867.38 00:17:41.624 clat (msec): min=2, max=246, avg=17.47, stdev=18.01 00:17:41.624 lat (msec): min=3, max=246, avg=17.64, stdev=18.40 00:17:41.624 clat percentiles (msec): 00:17:41.624 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 9], 20.00th=[ 10], 00:17:41.624 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 14], 60.00th=[ 15], 00:17:41.624 | 70.00th=[ 18], 80.00th=[ 20], 90.00th=[ 26], 95.00th=[ 44], 00:17:41.624 | 99.00th=[ 86], 99.50th=[ 127], 99.90th=[ 218], 99.95th=[ 247], 00:17:41.624 | 99.99th=[ 247] 00:17:41.624 bw ( KiB/s): min=11208, max=17464, per=21.20%, avg=14336.00, stdev=4423.66, samples=2 00:17:41.624 iops : min= 2802, max= 4366, avg=3584.00, stdev=1105.92, samples=2 00:17:41.624 lat (msec) : 2=0.18%, 4=0.41%, 10=30.49%, 20=57.37%, 50=10.67% 00:17:41.624 lat (msec) : 100=0.44%, 250=0.44% 00:17:41.624 cpu : usr=2.09%, sys=2.88%, ctx=535, majf=0, minf=1 00:17:41.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:41.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:41.624 issued rwts: total=3584,3664,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:41.624 job2: (groupid=0, jobs=1): err= 0: pid=3265978: Tue Jul 23 13:59:32 2024 00:17:41.624 read: IOPS=4027, BW=15.7MiB/s (16.5MB/s)(16.0MiB/1017msec) 00:17:41.624 slat (nsec): min=1502, max=17789k, avg=123564.45, stdev=914408.50 00:17:41.624 clat (usec): min=5720, max=41504, avg=16301.13, stdev=6039.85 00:17:41.624 lat (usec): min=6114, max=41533, avg=16424.69, stdev=6087.90 00:17:41.624 clat percentiles (usec): 00:17:41.624 | 1.00th=[ 6587], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[10159], 00:17:41.624 | 30.00th=[12518], 40.00th=[13566], 50.00th=[15401], 60.00th=[17171], 00:17:41.624 | 70.00th=[19006], 80.00th=[21890], 90.00th=[25035], 95.00th=[27919], 00:17:41.624 | 99.00th=[30278], 99.50th=[38536], 99.90th=[38536], 99.95th=[39060], 00:17:41.624 | 99.99th=[41681] 00:17:41.624 write: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(17.6MiB/1017msec); 0 zone resets 00:17:41.624 slat (usec): min=2, max=11843, avg=105.32, stdev=580.77 00:17:41.624 clat (usec): min=1732, max=33709, avg=13851.05, stdev=4643.30 00:17:41.624 lat (usec): min=1747, max=33713, avg=13956.38, stdev=4655.48 00:17:41.624 clat percentiles (usec): 00:17:41.624 | 1.00th=[ 4686], 5.00th=[ 6980], 10.00th=[ 8586], 20.00th=[ 9896], 00:17:41.624 | 30.00th=[11207], 40.00th=[11994], 50.00th=[13698], 60.00th=[14353], 00:17:41.624 | 70.00th=[16319], 80.00th=[17957], 90.00th=[19530], 95.00th=[21890], 00:17:41.624 | 99.00th=[27919], 99.50th=[30016], 99.90th=[33817], 99.95th=[33817], 00:17:41.624 | 99.99th=[33817] 00:17:41.624 bw ( KiB/s): min=16560, max=18368, per=25.83%, avg=17464.00, stdev=1278.45, samples=2 00:17:41.624 iops : min= 4140, max= 4592, avg=4366.00, stdev=319.61, samples=2 00:17:41.624 lat (msec) : 2=0.03%, 4=0.22%, 10=19.35%, 20=63.05%, 50=17.35% 00:17:41.624 cpu : usr=2.76%, sys=4.53%, ctx=622, majf=0, minf=1 00:17:41.624 
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:41.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:41.624 issued rwts: total=4096,4494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:41.624 job3: (groupid=0, jobs=1): err= 0: pid=3265979: Tue Jul 23 13:59:32 2024 00:17:41.624 read: IOPS=3406, BW=13.3MiB/s (14.0MB/s)(13.5MiB/1013msec) 00:17:41.624 slat (nsec): min=1139, max=81001k, avg=145100.11, stdev=1687503.52 00:17:41.624 clat (usec): min=1374, max=116952, avg=20498.10, stdev=17395.97 00:17:41.624 lat (msec): min=3, max=116, avg=20.64, stdev=17.52 00:17:41.624 clat percentiles (msec): 00:17:41.624 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 12], 00:17:41.624 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:17:41.624 | 70.00th=[ 20], 80.00th=[ 22], 90.00th=[ 34], 95.00th=[ 44], 00:17:41.624 | 99.00th=[ 101], 99.50th=[ 101], 99.90th=[ 101], 99.95th=[ 115], 00:17:41.624 | 99.99th=[ 117] 00:17:41.624 write: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec); 0 zone resets 00:17:41.624 slat (nsec): min=1946, max=15324k, avg=104962.01, stdev=754403.51 00:17:41.624 clat (usec): min=2448, max=57884, avg=15914.78, stdev=7905.29 00:17:41.624 lat (usec): min=2472, max=57896, avg=16019.75, stdev=7940.72 00:17:41.624 clat percentiles (usec): 00:17:41.624 | 1.00th=[ 3949], 5.00th=[ 6783], 10.00th=[ 9110], 20.00th=[10421], 00:17:41.624 | 30.00th=[11338], 40.00th=[12387], 50.00th=[13960], 60.00th=[16057], 00:17:41.624 | 70.00th=[17957], 80.00th=[20317], 90.00th=[23987], 95.00th=[29492], 00:17:41.624 | 99.00th=[46400], 99.50th=[50594], 99.90th=[57934], 99.95th=[57934], 00:17:41.624 | 99.99th=[57934] 00:17:41.624 bw ( KiB/s): min=12288, max=16384, per=21.20%, avg=14336.00, stdev=2896.31, samples=2 00:17:41.624 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:17:41.624 lat (msec) : 2=0.01%, 4=0.94%, 10=10.22%, 20=65.60%, 50=20.98% 00:17:41.624 lat (msec) : 100=1.32%, 250=0.92% 00:17:41.624 cpu : usr=2.37%, sys=4.05%, ctx=354, majf=0, minf=1 00:17:41.624 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:41.624 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:41.624 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:41.624 issued rwts: total=3451,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:41.624 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:41.624 00:17:41.624 Run status group 0 (all jobs): 00:17:41.624 READ: bw=62.4MiB/s (65.5MB/s), 13.3MiB/s-19.8MiB/s (14.0MB/s-20.8MB/s), io=63.5MiB (66.6MB), run=1007-1017msec 00:17:41.624 WRITE: bw=66.0MiB/s (69.2MB/s), 13.8MiB/s-21.1MiB/s (14.5MB/s-22.1MB/s), io=67.2MiB (70.4MB), run=1007-1017msec 00:17:41.624 00:17:41.624 Disk stats (read/write): 00:17:41.624 nvme0n1: ios=4227/4608, merge=0/0, ticks=47015/54935, in_queue=101950, util=96.59% 00:17:41.625 nvme0n2: ios=2590/2799, merge=0/0, ticks=21089/23128, in_queue=44217, util=98.78% 00:17:41.625 nvme0n3: ios=3627/3660, merge=0/0, ticks=60219/45571, in_queue=105790, util=96.78% 00:17:41.625 nvme0n4: ios=3130/3584, merge=0/0, ticks=48759/45762, in_queue=94521, util=98.74% 00:17:41.625 13:59:32 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:41.625 [global] 
00:17:41.625 thread=1 00:17:41.625 invalidate=1 00:17:41.625 rw=randwrite 00:17:41.625 time_based=1 00:17:41.625 runtime=1 00:17:41.625 ioengine=libaio 00:17:41.625 direct=1 00:17:41.625 bs=4096 00:17:41.625 iodepth=128 00:17:41.625 norandommap=0 00:17:41.625 numjobs=1 00:17:41.625 00:17:41.625 verify_dump=1 00:17:41.625 verify_backlog=512 00:17:41.625 verify_state_save=0 00:17:41.625 do_verify=1 00:17:41.625 verify=crc32c-intel 00:17:41.625 [job0] 00:17:41.625 filename=/dev/nvme0n1 00:17:41.625 [job1] 00:17:41.625 filename=/dev/nvme0n2 00:17:41.625 [job2] 00:17:41.625 filename=/dev/nvme0n3 00:17:41.625 [job3] 00:17:41.625 filename=/dev/nvme0n4 00:17:41.625 Could not set queue depth (nvme0n1) 00:17:41.625 Could not set queue depth (nvme0n2) 00:17:41.625 Could not set queue depth (nvme0n3) 00:17:41.625 Could not set queue depth (nvme0n4) 00:17:41.884 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:41.884 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:41.884 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:41.884 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:41.884 fio-3.35 00:17:41.884 Starting 4 threads 00:17:43.256 00:17:43.256 job0: (groupid=0, jobs=1): err= 0: pid=3266360: Tue Jul 23 13:59:33 2024 00:17:43.256 read: IOPS=3028, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1012msec) 00:17:43.256 slat (nsec): min=1076, max=40273k, avg=168094.83, stdev=1498275.26 00:17:43.256 clat (usec): min=5895, max=89283, avg=22367.67, stdev=16358.38 00:17:43.256 lat (usec): min=5898, max=89860, avg=22535.77, stdev=16494.42 00:17:43.256 clat percentiles (usec): 00:17:43.256 | 1.00th=[ 6849], 5.00th=[ 7701], 10.00th=[ 8979], 20.00th=[ 9503], 00:17:43.256 | 30.00th=[10945], 40.00th=[13304], 50.00th=[14877], 60.00th=[18220], 00:17:43.256 | 70.00th=[28181], 80.00th=[34341], 90.00th=[42206], 95.00th=[62129], 00:17:43.256 | 99.00th=[72877], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:17:43.256 | 99.99th=[89654] 00:17:43.256 write: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1012msec); 0 zone resets 00:17:43.256 slat (nsec): min=1858, max=23700k, avg=148029.37, stdev=1034999.68 00:17:43.256 clat (usec): min=1239, max=97535, avg=19208.64, stdev=16239.62 00:17:43.256 lat (usec): min=1842, max=97545, avg=19356.67, stdev=16360.92 00:17:43.256 clat percentiles (usec): 00:17:43.257 | 1.00th=[ 4817], 5.00th=[ 6783], 10.00th=[ 8029], 20.00th=[ 9241], 00:17:43.257 | 30.00th=[10421], 40.00th=[11076], 50.00th=[12518], 60.00th=[15139], 00:17:43.257 | 70.00th=[17695], 80.00th=[28181], 90.00th=[38536], 95.00th=[47973], 00:17:43.257 | 99.00th=[88605], 99.50th=[90702], 99.90th=[98042], 99.95th=[98042], 00:17:43.257 | 99.99th=[98042] 00:17:43.257 bw ( KiB/s): min= 8192, max=16384, per=21.00%, avg=12288.00, stdev=5792.62, samples=2 00:17:43.257 iops : min= 2048, max= 4096, avg=3072.00, stdev=1448.15, samples=2 00:17:43.257 lat (msec) : 2=0.05%, 4=0.20%, 10=26.09%, 20=40.64%, 50=25.89% 00:17:43.257 lat (msec) : 100=7.14% 00:17:43.257 cpu : usr=1.98%, sys=2.57%, ctx=372, majf=0, minf=1 00:17:43.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:17:43.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.257 issued rwts: 
total=3065,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.257 job1: (groupid=0, jobs=1): err= 0: pid=3266361: Tue Jul 23 13:59:33 2024 00:17:43.257 read: IOPS=3468, BW=13.5MiB/s (14.2MB/s)(13.7MiB/1012msec) 00:17:43.257 slat (nsec): min=1608, max=15752k, avg=138310.20, stdev=924075.67 00:17:43.257 clat (usec): min=5671, max=38489, avg=17795.74, stdev=5611.65 00:17:43.257 lat (usec): min=5673, max=38499, avg=17934.05, stdev=5651.65 00:17:43.257 clat percentiles (usec): 00:17:43.257 | 1.00th=[ 7308], 5.00th=[ 8717], 10.00th=[10945], 20.00th=[12780], 00:17:43.257 | 30.00th=[15270], 40.00th=[16057], 50.00th=[17433], 60.00th=[18482], 00:17:43.257 | 70.00th=[20055], 80.00th=[21890], 90.00th=[25297], 95.00th=[28705], 00:17:43.257 | 99.00th=[32637], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:17:43.257 | 99.99th=[38536] 00:17:43.257 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:17:43.257 slat (usec): min=2, max=13501, avg=138.77, stdev=844.91 00:17:43.257 clat (usec): min=1645, max=42224, avg=18313.70, stdev=7207.68 00:17:43.257 lat (usec): min=1658, max=42228, avg=18452.46, stdev=7230.57 00:17:43.257 clat percentiles (usec): 00:17:43.257 | 1.00th=[ 6521], 5.00th=[ 9241], 10.00th=[10683], 20.00th=[12518], 00:17:43.257 | 30.00th=[13698], 40.00th=[14746], 50.00th=[16319], 60.00th=[19006], 00:17:43.257 | 70.00th=[21890], 80.00th=[23725], 90.00th=[28181], 95.00th=[32375], 00:17:43.257 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:17:43.257 | 99.99th=[42206] 00:17:43.257 bw ( KiB/s): min=13264, max=15408, per=24.50%, avg=14336.00, stdev=1516.04, samples=2 00:17:43.257 iops : min= 3316, max= 3852, avg=3584.00, stdev=379.01, samples=2 00:17:43.257 lat (msec) : 2=0.03%, 10=7.06%, 20=60.80%, 50=32.11% 00:17:43.257 cpu : usr=3.26%, sys=3.46%, ctx=420, majf=0, minf=1 00:17:43.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:17:43.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.257 issued rwts: total=3510,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.257 job2: (groupid=0, jobs=1): err= 0: pid=3266362: Tue Jul 23 13:59:33 2024 00:17:43.257 read: IOPS=4127, BW=16.1MiB/s (16.9MB/s)(16.4MiB/1015msec) 00:17:43.257 slat (nsec): min=1505, max=17329k, avg=109506.25, stdev=717224.04 00:17:43.257 clat (usec): min=4716, max=61653, avg=14038.07, stdev=7351.11 00:17:43.257 lat (usec): min=4724, max=61661, avg=14147.57, stdev=7412.74 00:17:43.257 clat percentiles (usec): 00:17:43.257 | 1.00th=[ 4752], 5.00th=[ 7177], 10.00th=[ 7898], 20.00th=[ 8586], 00:17:43.257 | 30.00th=[ 9372], 40.00th=[10945], 50.00th=[12256], 60.00th=[13566], 00:17:43.257 | 70.00th=[15664], 80.00th=[18220], 90.00th=[21627], 95.00th=[25560], 00:17:43.257 | 99.00th=[46924], 99.50th=[54264], 99.90th=[56361], 99.95th=[61604], 00:17:43.257 | 99.99th=[61604] 00:17:43.257 write: IOPS=4539, BW=17.7MiB/s (18.6MB/s)(18.0MiB/1015msec); 0 zone resets 00:17:43.257 slat (usec): min=2, max=13688, avg=112.99, stdev=572.31 00:17:43.257 clat (usec): min=2401, max=68448, avg=15176.66, stdev=8451.93 00:17:43.257 lat (usec): min=3702, max=68451, avg=15289.65, stdev=8482.54 00:17:43.257 clat percentiles (usec): 00:17:43.257 | 1.00th=[ 4621], 5.00th=[ 5932], 10.00th=[ 7504], 20.00th=[ 9110], 00:17:43.257 | 
30.00th=[ 9634], 40.00th=[12125], 50.00th=[14484], 60.00th=[15664], 00:17:43.257 | 70.00th=[16581], 80.00th=[17957], 90.00th=[26870], 95.00th=[30278], 00:17:43.257 | 99.00th=[49546], 99.50th=[65274], 99.90th=[68682], 99.95th=[68682], 00:17:43.257 | 99.99th=[68682] 00:17:43.257 bw ( KiB/s): min=12328, max=24264, per=31.27%, avg=18296.00, stdev=8440.03, samples=2 00:17:43.257 iops : min= 3082, max= 6066, avg=4574.00, stdev=2110.01, samples=2 00:17:43.257 lat (msec) : 4=0.11%, 10=32.73%, 20=53.14%, 50=13.22%, 100=0.80% 00:17:43.257 cpu : usr=2.86%, sys=3.55%, ctx=676, majf=0, minf=1 00:17:43.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:43.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.257 issued rwts: total=4189,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.257 job3: (groupid=0, jobs=1): err= 0: pid=3266363: Tue Jul 23 13:59:33 2024 00:17:43.257 read: IOPS=3534, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:17:43.257 slat (nsec): min=1676, max=14553k, avg=136105.37, stdev=946559.50 00:17:43.257 clat (usec): min=4658, max=38730, avg=19233.45, stdev=6034.55 00:17:43.257 lat (usec): min=4664, max=38740, avg=19369.55, stdev=6074.95 00:17:43.257 clat percentiles (usec): 00:17:43.257 | 1.00th=[ 6259], 5.00th=[ 8848], 10.00th=[11863], 20.00th=[14615], 00:17:43.257 | 30.00th=[16057], 40.00th=[17171], 50.00th=[19268], 60.00th=[20317], 00:17:43.257 | 70.00th=[21627], 80.00th=[23725], 90.00th=[27132], 95.00th=[30278], 00:17:43.257 | 99.00th=[35914], 99.50th=[35914], 99.90th=[38536], 99.95th=[38536], 00:17:43.257 | 99.99th=[38536] 00:17:43.257 write: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec); 0 zone resets 00:17:43.257 slat (usec): min=2, max=13301, avg=119.66, stdev=803.06 00:17:43.257 clat (usec): min=1128, max=44787, avg=16566.31, stdev=7627.18 00:17:43.257 lat (usec): min=1143, max=44791, avg=16685.97, stdev=7656.93 00:17:43.257 clat percentiles (usec): 00:17:43.257 | 1.00th=[ 3818], 5.00th=[ 7439], 10.00th=[ 7898], 20.00th=[10552], 00:17:43.257 | 30.00th=[11994], 40.00th=[13566], 50.00th=[14484], 60.00th=[16319], 00:17:43.257 | 70.00th=[19006], 80.00th=[23725], 90.00th=[26870], 95.00th=[31327], 00:17:43.257 | 99.00th=[40109], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:17:43.257 | 99.99th=[44827] 00:17:43.257 bw ( KiB/s): min=12288, max=16384, per=24.50%, avg=14336.00, stdev=2896.31, samples=2 00:17:43.257 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:17:43.257 lat (msec) : 2=0.04%, 4=0.71%, 10=11.93%, 20=50.96%, 50=36.36% 00:17:43.257 cpu : usr=2.77%, sys=4.45%, ctx=389, majf=0, minf=1 00:17:43.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:17:43.257 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.257 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:43.257 issued rwts: total=3577,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.257 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:43.257 00:17:43.257 Run status group 0 (all jobs): 00:17:43.257 READ: bw=55.2MiB/s (57.9MB/s), 11.8MiB/s-16.1MiB/s (12.4MB/s-16.9MB/s), io=56.0MiB (58.7MB), run=1012-1015msec 00:17:43.257 WRITE: bw=57.1MiB/s (59.9MB/s), 11.9MiB/s-17.7MiB/s (12.4MB/s-18.6MB/s), io=58.0MiB (60.8MB), run=1012-1015msec 00:17:43.257 00:17:43.257 Disk stats 
(read/write): 00:17:43.257 nvme0n1: ios=2490/2560, merge=0/0, ticks=32530/31575, in_queue=64105, util=94.59% 00:17:43.257 nvme0n2: ios=3108/3143, merge=0/0, ticks=54524/52240, in_queue=106764, util=96.75% 00:17:43.257 nvme0n3: ios=4045/4096, merge=0/0, ticks=52758/53271, in_queue=106029, util=92.95% 00:17:43.257 nvme0n4: ios=3095/3166, merge=0/0, ticks=59893/46144, in_queue=106037, util=97.70% 00:17:43.257 13:59:33 -- target/fio.sh@55 -- # sync 00:17:43.257 13:59:33 -- target/fio.sh@59 -- # fio_pid=3266597 00:17:43.257 13:59:33 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:43.257 13:59:33 -- target/fio.sh@61 -- # sleep 3 00:17:43.257 [global] 00:17:43.257 thread=1 00:17:43.257 invalidate=1 00:17:43.257 rw=read 00:17:43.257 time_based=1 00:17:43.257 runtime=10 00:17:43.257 ioengine=libaio 00:17:43.257 direct=1 00:17:43.257 bs=4096 00:17:43.257 iodepth=1 00:17:43.257 norandommap=1 00:17:43.257 numjobs=1 00:17:43.257 00:17:43.257 [job0] 00:17:43.257 filename=/dev/nvme0n1 00:17:43.257 [job1] 00:17:43.257 filename=/dev/nvme0n2 00:17:43.257 [job2] 00:17:43.258 filename=/dev/nvme0n3 00:17:43.258 [job3] 00:17:43.258 filename=/dev/nvme0n4 00:17:43.258 Could not set queue depth (nvme0n1) 00:17:43.258 Could not set queue depth (nvme0n2) 00:17:43.258 Could not set queue depth (nvme0n3) 00:17:43.258 Could not set queue depth (nvme0n4) 00:17:43.258 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:43.258 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:43.258 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:43.258 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:43.258 fio-3.35 00:17:43.258 Starting 4 threads 00:17:46.533 13:59:36 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:46.533 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21712896, buflen=4096 00:17:46.533 fio: pid=3266749, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:46.533 13:59:37 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:46.533 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=282624, buflen=4096 00:17:46.533 fio: pid=3266747, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:46.533 13:59:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:46.533 13:59:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:46.533 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=18305024, buflen=4096 00:17:46.533 fio: pid=3266734, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:46.533 13:59:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:46.533 13:59:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:46.791 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=22253568, buflen=4096 00:17:46.791 fio: pid=3266740, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:17:46.791 
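The io_u errors above are the intended outcome: while the four read jobs run (runtime=10), the script deletes the backing bdevs out from under the subsystem. Condensed, the hotplug sequence traced here amounts to the sketch below ($rpc is shorthand introduced here for the full rpc.py path shown in the trace; the $malloc_bdevs/$raid_malloc_bdevs/$concat_malloc_bdevs lists come from the surrounding target/fio.sh):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_raid_delete concat0    # jobs on deleted bdevs start failing with
    $rpc bdev_raid_delete raid0      # err=121 (Remote I/O error) or err=5 (I/O error)
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        $rpc bdev_malloc_delete "$malloc_bdev"
    done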
13:59:37 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:46.791 13:59:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:46.791 00:17:46.791 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3266734: Tue Jul 23 13:59:37 2024 00:17:46.791 read: IOPS=1453, BW=5813KiB/s (5953kB/s)(17.5MiB/3075msec) 00:17:46.791 slat (usec): min=3, max=11637, avg=11.01, stdev=173.99 00:17:46.791 clat (usec): min=305, max=42726, avg=675.42, stdev=2556.24 00:17:46.791 lat (usec): min=311, max=53061, avg=686.43, stdev=2603.97 00:17:46.791 clat percentiles (usec): 00:17:46.791 | 1.00th=[ 347], 5.00th=[ 392], 10.00th=[ 404], 20.00th=[ 416], 00:17:46.791 | 30.00th=[ 478], 40.00th=[ 498], 50.00th=[ 510], 60.00th=[ 523], 00:17:46.791 | 70.00th=[ 537], 80.00th=[ 578], 90.00th=[ 660], 95.00th=[ 709], 00:17:46.791 | 99.00th=[ 881], 99.50th=[ 1139], 99.90th=[42206], 99.95th=[42206], 00:17:46.791 | 99.99th=[42730] 00:17:46.791 bw ( KiB/s): min= 5880, max= 8248, per=37.97%, avg=7128.00, stdev=1018.25, samples=5 00:17:46.791 iops : min= 1470, max= 2062, avg=1782.00, stdev=254.56, samples=5 00:17:46.791 lat (usec) : 500=40.89%, 750=55.97%, 1000=2.55% 00:17:46.791 lat (msec) : 2=0.16%, 4=0.02%, 50=0.38% 00:17:46.791 cpu : usr=0.49%, sys=1.50%, ctx=4471, majf=0, minf=1 00:17:46.791 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:46.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.791 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.791 issued rwts: total=4470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.791 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:46.791 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=3266740: Tue Jul 23 13:59:37 2024 00:17:46.791 read: IOPS=1669, BW=6679KiB/s (6839kB/s)(21.2MiB/3254msec) 00:17:46.791 slat (usec): min=5, max=13409, avg=19.19, stdev=359.72 00:17:46.791 clat (usec): min=326, max=42086, avg=578.08, stdev=1491.25 00:17:46.791 lat (usec): min=332, max=53541, avg=597.27, stdev=1625.08 00:17:46.791 clat percentiles (usec): 00:17:46.791 | 1.00th=[ 355], 5.00th=[ 400], 10.00th=[ 408], 20.00th=[ 433], 00:17:46.791 | 30.00th=[ 461], 40.00th=[ 486], 50.00th=[ 502], 60.00th=[ 519], 00:17:46.791 | 70.00th=[ 545], 80.00th=[ 594], 90.00th=[ 676], 95.00th=[ 750], 00:17:46.791 | 99.00th=[ 873], 99.50th=[ 996], 99.90th=[41681], 99.95th=[42206], 00:17:46.791 | 99.99th=[42206] 00:17:46.791 bw ( KiB/s): min= 5488, max= 8128, per=37.30%, avg=7002.83, stdev=851.49, samples=6 00:17:46.791 iops : min= 1372, max= 2032, avg=1750.67, stdev=212.87, samples=6 00:17:46.791 lat (usec) : 500=49.91%, 750=45.14%, 1000=4.44% 00:17:46.791 lat (msec) : 2=0.35%, 20=0.02%, 50=0.13% 00:17:46.791 cpu : usr=0.58%, sys=1.72%, ctx=5440, majf=0, minf=1 00:17:46.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.792 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.792 issued rwts: total=5434,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:46.792 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3266747: Tue Jul 23 13:59:37 
2024 00:17:46.792 read: IOPS=24, BW=96.5KiB/s (98.8kB/s)(276KiB/2861msec) 00:17:46.792 slat (nsec): min=17631, max=29472, avg=23313.77, stdev=1477.76 00:17:46.792 clat (usec): min=1554, max=43215, avg=41423.27, stdev=4883.20 00:17:46.792 lat (usec): min=1583, max=43239, avg=41446.56, stdev=4882.44 00:17:46.792 clat percentiles (usec): 00:17:46.792 | 1.00th=[ 1549], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:17:46.792 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:46.792 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:17:46.792 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:17:46.792 | 99.99th=[43254] 00:17:46.792 bw ( KiB/s): min= 96, max= 96, per=0.51%, avg=96.00, stdev= 0.00, samples=5 00:17:46.792 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:17:46.792 lat (msec) : 2=1.43%, 50=97.14% 00:17:46.792 cpu : usr=0.00%, sys=0.10%, ctx=72, majf=0, minf=1 00:17:46.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.792 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.792 issued rwts: total=70,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:46.792 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3266749: Tue Jul 23 13:59:37 2024 00:17:46.792 read: IOPS=1983, BW=7933KiB/s (8123kB/s)(20.7MiB/2673msec) 00:17:46.792 slat (nsec): min=6052, max=35445, avg=8452.70, stdev=4604.80 00:17:46.792 clat (usec): min=308, max=1044, avg=494.11, stdev=78.67 00:17:46.792 lat (usec): min=315, max=1051, avg=502.57, stdev=81.68 00:17:46.792 clat percentiles (usec): 00:17:46.792 | 1.00th=[ 347], 5.00th=[ 392], 10.00th=[ 412], 20.00th=[ 445], 00:17:46.792 | 30.00th=[ 461], 40.00th=[ 474], 50.00th=[ 482], 60.00th=[ 494], 00:17:46.792 | 70.00th=[ 502], 80.00th=[ 523], 90.00th=[ 619], 95.00th=[ 660], 00:17:46.792 | 99.00th=[ 717], 99.50th=[ 758], 99.90th=[ 922], 99.95th=[ 979], 00:17:46.792 | 99.99th=[ 1045] 00:17:46.792 bw ( KiB/s): min= 7080, max= 8696, per=42.40%, avg=7960.00, stdev=696.16, samples=5 00:17:46.792 iops : min= 1770, max= 2174, avg=1990.00, stdev=174.04, samples=5 00:17:46.792 lat (usec) : 500=67.37%, 750=32.08%, 1000=0.49% 00:17:46.792 lat (msec) : 2=0.04% 00:17:46.792 cpu : usr=0.64%, sys=2.06%, ctx=5302, majf=0, minf=2 00:17:46.792 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:46.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.792 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.792 issued rwts: total=5302,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.792 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:46.792 00:17:46.792 Run status group 0 (all jobs): 00:17:46.792 READ: bw=18.3MiB/s (19.2MB/s), 96.5KiB/s-7933KiB/s (98.8kB/s-8123kB/s), io=59.7MiB (62.6MB), run=2673-3254msec 00:17:46.792 00:17:46.792 Disk stats (read/write): 00:17:46.792 nvme0n1: ios=4464/0, merge=0/0, ticks=2766/0, in_queue=2766, util=95.26% 00:17:46.792 nvme0n2: ios=5429/0, merge=0/0, ticks=2930/0, in_queue=2930, util=94.47% 00:17:46.792 nvme0n3: ios=116/0, merge=0/0, ticks=3007/0, in_queue=3007, util=99.83% 00:17:46.792 nvme0n4: ios=5154/0, merge=0/0, ticks=2519/0, in_queue=2519, util=96.41% 00:17:47.050 13:59:37 -- target/fio.sh@65 -- # for 
malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:47.050 13:59:37 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:47.050 13:59:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:47.050 13:59:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:47.307 13:59:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:47.307 13:59:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:47.565 13:59:38 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:47.565 13:59:38 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:47.565 13:59:38 -- target/fio.sh@69 -- # fio_status=0 00:17:47.565 13:59:38 -- target/fio.sh@70 -- # wait 3266597 00:17:47.565 13:59:38 -- target/fio.sh@70 -- # fio_status=4 00:17:47.565 13:59:38 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.823 13:59:38 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:47.823 13:59:38 -- common/autotest_common.sh@1198 -- # local i=0 00:17:47.823 13:59:38 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:17:47.823 13:59:38 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.823 13:59:38 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:17:47.823 13:59:38 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:47.823 13:59:38 -- common/autotest_common.sh@1210 -- # return 0 00:17:47.823 13:59:38 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:47.823 13:59:38 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:47.823 nvmf hotplug test: fio failed as expected 00:17:47.823 13:59:38 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:48.081 13:59:38 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:48.081 13:59:38 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:48.081 13:59:38 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:48.081 13:59:38 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:48.081 13:59:38 -- target/fio.sh@91 -- # nvmftestfini 00:17:48.081 13:59:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:48.081 13:59:38 -- nvmf/common.sh@116 -- # sync 00:17:48.081 13:59:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:48.081 13:59:38 -- nvmf/common.sh@119 -- # set +e 00:17:48.081 13:59:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:48.081 13:59:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:48.081 rmmod nvme_tcp 00:17:48.081 rmmod nvme_fabrics 00:17:48.081 rmmod nvme_keyring 00:17:48.081 13:59:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:48.081 13:59:38 -- nvmf/common.sh@123 -- # set -e 00:17:48.081 13:59:38 -- nvmf/common.sh@124 -- # return 0 00:17:48.081 13:59:38 -- nvmf/common.sh@477 -- # '[' -n 3263809 ']' 00:17:48.081 13:59:38 -- nvmf/common.sh@478 -- # killprocess 3263809 00:17:48.081 13:59:38 -- common/autotest_common.sh@926 -- # '[' -z 3263809 ']' 
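The teardown hinges on fio exiting nonzero once its namespaces vanish; that is what target/fio.sh@69-80 checks before printing 'fio failed as expected'. A condensed sketch of that check (fio_pid is the backgrounded fio-wrapper launched at @59, pid 3266597 above; the real script's variable handling may differ slightly):

    fio_status=0
    wait "$fio_pid" || fio_status=4    # devices were hot-removed, so fio must fail
    if [ "$fio_status" -eq 0 ]; then
        echo 'nvmf hotplug test: fio unexpectedly succeeded' >&2
        exit 1
    fi
    echo 'nvmf hotplug test: fio failed as expected'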
00:17:48.081 13:59:38 -- common/autotest_common.sh@930 -- # kill -0 3263809 00:17:48.081 13:59:38 -- common/autotest_common.sh@931 -- # uname 00:17:48.081 13:59:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:48.081 13:59:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3263809 00:17:48.081 13:59:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:48.081 13:59:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:48.081 13:59:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3263809' 00:17:48.081 killing process with pid 3263809 00:17:48.081 13:59:38 -- common/autotest_common.sh@945 -- # kill 3263809 00:17:48.081 13:59:38 -- common/autotest_common.sh@950 -- # wait 3263809 00:17:48.349 13:59:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:48.349 13:59:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:48.349 13:59:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:48.349 13:59:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:48.349 13:59:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:48.349 13:59:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.349 13:59:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:48.349 13:59:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.276 13:59:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:50.276 00:17:50.276 real 0m26.256s 00:17:50.276 user 1m44.914s 00:17:50.276 sys 0m7.760s 00:17:50.276 13:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:50.276 13:59:41 -- common/autotest_common.sh@10 -- # set +x 00:17:50.276 ************************************ 00:17:50.276 END TEST nvmf_fio_target 00:17:50.276 ************************************ 00:17:50.276 13:59:41 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:50.276 13:59:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:50.276 13:59:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:50.276 13:59:41 -- common/autotest_common.sh@10 -- # set +x 00:17:50.276 ************************************ 00:17:50.276 START TEST nvmf_bdevio 00:17:50.276 ************************************ 00:17:50.276 13:59:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:50.534 * Looking for test storage... 
00:17:50.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:50.534 13:59:41 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:50.534 13:59:41 -- nvmf/common.sh@7 -- # uname -s 00:17:50.534 13:59:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:50.534 13:59:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:50.534 13:59:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:50.534 13:59:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:50.534 13:59:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:50.534 13:59:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:50.534 13:59:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:50.534 13:59:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:50.534 13:59:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:50.534 13:59:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:50.534 13:59:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.534 13:59:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.534 13:59:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:50.534 13:59:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:50.534 13:59:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:50.534 13:59:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:50.534 13:59:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:50.534 13:59:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:50.534 13:59:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:50.534 13:59:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.534 13:59:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.534 13:59:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.534 13:59:41 -- paths/export.sh@5 -- # export PATH 00:17:50.534 13:59:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:50.534 13:59:41 -- nvmf/common.sh@46 -- # : 0 00:17:50.534 13:59:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:50.534 13:59:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:50.534 13:59:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:50.534 13:59:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:50.534 13:59:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:50.534 13:59:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:50.534 13:59:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:50.534 13:59:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:50.534 13:59:41 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:50.534 13:59:41 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:50.534 13:59:41 -- target/bdevio.sh@14 -- # nvmftestinit 00:17:50.534 13:59:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:50.534 13:59:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:50.534 13:59:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:50.534 13:59:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:50.534 13:59:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:50.535 13:59:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.535 13:59:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.535 13:59:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:50.535 13:59:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:50.535 13:59:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:50.535 13:59:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:50.535 13:59:41 -- common/autotest_common.sh@10 -- # set +x 00:17:55.804 13:59:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:55.804 13:59:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:55.804 13:59:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:55.804 13:59:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:55.804 13:59:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:55.804 13:59:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:55.804 13:59:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:55.804 13:59:46 -- nvmf/common.sh@294 -- # net_devs=() 00:17:55.804 13:59:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:55.804 13:59:46 -- nvmf/common.sh@295 
-- # e810=() 00:17:55.804 13:59:46 -- nvmf/common.sh@295 -- # local -ga e810 00:17:55.804 13:59:46 -- nvmf/common.sh@296 -- # x722=() 00:17:55.804 13:59:46 -- nvmf/common.sh@296 -- # local -ga x722 00:17:55.804 13:59:46 -- nvmf/common.sh@297 -- # mlx=() 00:17:55.804 13:59:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:55.804 13:59:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.804 13:59:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:55.804 13:59:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:55.804 13:59:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:55.804 13:59:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:55.804 13:59:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:55.804 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:55.804 13:59:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:55.804 13:59:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:55.804 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:55.804 13:59:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:55.804 13:59:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:55.804 13:59:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.804 13:59:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:55.804 13:59:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.804 13:59:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:55.804 Found 
net devices under 0000:86:00.0: cvl_0_0 00:17:55.804 13:59:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.804 13:59:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:55.804 13:59:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.804 13:59:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:55.804 13:59:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.804 13:59:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:55.804 Found net devices under 0000:86:00.1: cvl_0_1 00:17:55.804 13:59:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.804 13:59:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:55.804 13:59:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:55.804 13:59:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:55.804 13:59:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:55.804 13:59:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.804 13:59:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.804 13:59:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.804 13:59:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:55.804 13:59:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.804 13:59:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.804 13:59:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:55.804 13:59:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.804 13:59:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.804 13:59:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:55.804 13:59:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:55.804 13:59:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.804 13:59:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.804 13:59:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.804 13:59:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.804 13:59:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:55.804 13:59:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:56.065 13:59:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.065 13:59:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.065 13:59:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:56.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:56.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:17:56.065 00:17:56.065 --- 10.0.0.2 ping statistics --- 00:17:56.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.065 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:17:56.065 13:59:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:56.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:17:56.065 00:17:56.065 --- 10.0.0.1 ping statistics --- 00:17:56.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.065 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:17:56.065 13:59:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.065 13:59:46 -- nvmf/common.sh@410 -- # return 0 00:17:56.065 13:59:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:56.065 13:59:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.065 13:59:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:56.065 13:59:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:56.065 13:59:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.065 13:59:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:56.065 13:59:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:56.065 13:59:46 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:56.065 13:59:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:56.065 13:59:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:56.065 13:59:46 -- common/autotest_common.sh@10 -- # set +x 00:17:56.065 13:59:46 -- nvmf/common.sh@469 -- # nvmfpid=3271007 00:17:56.065 13:59:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:17:56.065 13:59:46 -- nvmf/common.sh@470 -- # waitforlisten 3271007 00:17:56.065 13:59:46 -- common/autotest_common.sh@819 -- # '[' -z 3271007 ']' 00:17:56.065 13:59:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.065 13:59:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:56.065 13:59:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.065 13:59:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:56.065 13:59:46 -- common/autotest_common.sh@10 -- # set +x 00:17:56.065 [2024-07-23 13:59:46.946743] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:56.065 [2024-07-23 13:59:46.946783] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.065 EAL: No free 2048 kB hugepages reported on node 1 00:17:56.065 [2024-07-23 13:59:47.004085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:56.323 [2024-07-23 13:59:47.082288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:56.323 [2024-07-23 13:59:47.082394] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.323 [2024-07-23 13:59:47.082402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.323 [2024-07-23 13:59:47.082409] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
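Note: the trace above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the target's RPC socket appears. A minimal sketch of that startup handshake, using the namespace, flags, and paths shown in this log; the real waitforlisten helper retries longer and logs through xtrace:

    # Start the target in the test namespace (flags copied from the trace).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x78 &
    nvmfpid=$!
    # waitforlisten: poll until the app creates its UNIX-domain RPC socket.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # app died during init
        [ -S /var/tmp/spdk.sock ] && break         # socket up, RPC is live
        sleep 0.1
    done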
00:17:56.323 [2024-07-23 13:59:47.082518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:56.323 [2024-07-23 13:59:47.082554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:56.323 [2024-07-23 13:59:47.082668] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:56.323 [2024-07-23 13:59:47.082669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:56.887 13:59:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:56.887 13:59:47 -- common/autotest_common.sh@852 -- # return 0 00:17:56.888 13:59:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:56.888 13:59:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:56.888 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.888 13:59:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.888 13:59:47 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.888 13:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.888 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.888 [2024-07-23 13:59:47.791304] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.888 13:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.888 13:59:47 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:56.888 13:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.888 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.888 Malloc0 00:17:56.888 13:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.888 13:59:47 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:56.888 13:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.888 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.888 13:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.888 13:59:47 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:56.888 13:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.888 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.888 13:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.888 13:59:47 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.888 13:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:56.888 13:59:47 -- common/autotest_common.sh@10 -- # set +x 00:17:56.888 [2024-07-23 13:59:47.842894] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.888 13:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:56.888 13:59:47 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:17:56.888 13:59:47 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:56.888 13:59:47 -- nvmf/common.sh@520 -- # config=() 00:17:56.888 13:59:47 -- nvmf/common.sh@520 -- # local subsystem config 00:17:56.888 13:59:47 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:56.888 13:59:47 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:56.888 { 00:17:56.888 "params": { 00:17:56.888 "name": "Nvme$subsystem", 00:17:56.888 "trtype": "$TEST_TRANSPORT", 00:17:56.888 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:56.888 "adrfam": "ipv4", 00:17:56.888 "trsvcid": 
"$NVMF_PORT", 00:17:56.888 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:56.888 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:56.888 "hdgst": ${hdgst:-false}, 00:17:56.888 "ddgst": ${ddgst:-false} 00:17:56.888 }, 00:17:56.888 "method": "bdev_nvme_attach_controller" 00:17:56.888 } 00:17:56.888 EOF 00:17:56.888 )") 00:17:56.888 13:59:47 -- nvmf/common.sh@542 -- # cat 00:17:56.888 13:59:47 -- nvmf/common.sh@544 -- # jq . 00:17:56.888 13:59:47 -- nvmf/common.sh@545 -- # IFS=, 00:17:56.888 13:59:47 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:56.888 "params": { 00:17:56.888 "name": "Nvme1", 00:17:56.888 "trtype": "tcp", 00:17:56.888 "traddr": "10.0.0.2", 00:17:56.888 "adrfam": "ipv4", 00:17:56.888 "trsvcid": "4420", 00:17:56.888 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.888 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.888 "hdgst": false, 00:17:56.888 "ddgst": false 00:17:56.888 }, 00:17:56.888 "method": "bdev_nvme_attach_controller" 00:17:56.888 }' 00:17:56.888 [2024-07-23 13:59:47.886871] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:17:56.888 [2024-07-23 13:59:47.886915] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3271258 ] 00:17:57.145 EAL: No free 2048 kB hugepages reported on node 1 00:17:57.145 [2024-07-23 13:59:47.941060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:57.145 [2024-07-23 13:59:48.016621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.145 [2024-07-23 13:59:48.016714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.145 [2024-07-23 13:59:48.016715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.403 [2024-07-23 13:59:48.292997] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:17:57.403 [2024-07-23 13:59:48.293034] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:57.403 I/O targets: 00:17:57.403 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:57.403 00:17:57.403 00:17:57.403 CUnit - A unit testing framework for C - Version 2.1-3 00:17:57.403 http://cunit.sourceforge.net/ 00:17:57.403 00:17:57.403 00:17:57.403 Suite: bdevio tests on: Nvme1n1 00:17:57.403 Test: blockdev write read block ...passed 00:17:57.403 Test: blockdev write zeroes read block ...passed 00:17:57.403 Test: blockdev write zeroes read no split ...passed 00:17:57.660 Test: blockdev write zeroes read split ...passed 00:17:57.660 Test: blockdev write zeroes read split partial ...passed 00:17:57.660 Test: blockdev reset ...[2024-07-23 13:59:48.527863] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:57.660 [2024-07-23 13:59:48.527919] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a99590 (9): Bad file descriptor 00:17:57.660 [2024-07-23 13:59:48.541521] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:57.660 passed 00:17:57.660 Test: blockdev write read 8 blocks ...passed 00:17:57.660 Test: blockdev write read size > 128k ...passed 00:17:57.660 Test: blockdev write read invalid size ...passed 00:17:57.660 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:57.660 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:57.660 Test: blockdev write read max offset ...passed 00:17:57.660 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:57.917 Test: blockdev writev readv 8 blocks ...passed 00:17:57.917 Test: blockdev writev readv 30 x 1block ...passed 00:17:57.917 Test: blockdev writev readv block ...passed 00:17:57.917 Test: blockdev writev readv size > 128k ...passed 00:17:57.917 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:57.917 Test: blockdev comparev and writev ...[2024-07-23 13:59:48.769228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.917 [2024-07-23 13:59:48.769256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:57.917 [2024-07-23 13:59:48.769269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.917 [2024-07-23 13:59:48.769277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:57.917 [2024-07-23 13:59:48.769750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.917 [2024-07-23 13:59:48.769761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:57.917 [2024-07-23 13:59:48.769772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.917 [2024-07-23 13:59:48.769780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:57.918 [2024-07-23 13:59:48.770230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.918 [2024-07-23 13:59:48.770241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:57.918 [2024-07-23 13:59:48.770253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.918 [2024-07-23 13:59:48.770260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:57.918 [2024-07-23 13:59:48.770715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.918 [2024-07-23 13:59:48.770726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:57.918 [2024-07-23 13:59:48.770737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:57.918 [2024-07-23 13:59:48.770748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:57.918 passed 00:17:57.918 Test: blockdev nvme passthru rw ...passed 00:17:57.918 Test: blockdev nvme passthru vendor specific ...[2024-07-23 13:59:48.854919] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.918 [2024-07-23 13:59:48.854936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:57.918 [2024-07-23 13:59:48.855443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.918 [2024-07-23 13:59:48.855454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:57.918 [2024-07-23 13:59:48.855991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.918 [2024-07-23 13:59:48.856000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:57.918 [2024-07-23 13:59:48.856502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.918 [2024-07-23 13:59:48.856512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:57.918 passed 00:17:57.918 Test: blockdev nvme admin passthru ...passed 00:17:57.918 Test: blockdev copy ...passed 00:17:57.918 00:17:57.918 Run Summary: Type Total Ran Passed Failed Inactive 00:17:57.918 suites 1 1 n/a 0 0 00:17:57.918 tests 23 23 23 0 0 00:17:57.918 asserts 152 152 152 0 n/a 00:17:57.918 00:17:57.918 Elapsed time = 1.231 seconds 00:17:58.175 13:59:49 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.175 13:59:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.175 13:59:49 -- common/autotest_common.sh@10 -- # set +x 00:17:58.175 13:59:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.175 13:59:49 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:58.175 13:59:49 -- target/bdevio.sh@30 -- # nvmftestfini 00:17:58.175 13:59:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:58.175 13:59:49 -- nvmf/common.sh@116 -- # sync 00:17:58.175 13:59:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:58.175 13:59:49 -- nvmf/common.sh@119 -- # set +e 00:17:58.175 13:59:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:58.175 13:59:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:58.175 rmmod nvme_tcp 00:17:58.175 rmmod nvme_fabrics 00:17:58.175 rmmod nvme_keyring 00:17:58.175 13:59:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:58.175 13:59:49 -- nvmf/common.sh@123 -- # set -e 00:17:58.175 13:59:49 -- nvmf/common.sh@124 -- # return 0 00:17:58.175 13:59:49 -- nvmf/common.sh@477 -- # '[' -n 3271007 ']' 00:17:58.175 13:59:49 -- nvmf/common.sh@478 -- # killprocess 3271007 00:17:58.175 13:59:49 -- common/autotest_common.sh@926 -- # '[' -z 3271007 ']' 00:17:58.175 13:59:49 -- common/autotest_common.sh@930 -- # kill -0 3271007 00:17:58.175 13:59:49 -- common/autotest_common.sh@931 -- # uname 00:17:58.175 13:59:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:58.175 13:59:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3271007 00:17:58.433 13:59:49 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:17:58.433 13:59:49 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:17:58.433 13:59:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3271007' 00:17:58.433 killing process with pid 3271007 00:17:58.433 13:59:49 -- common/autotest_common.sh@945 -- # kill 3271007 00:17:58.433 13:59:49 -- common/autotest_common.sh@950 -- # wait 3271007 00:17:58.433 13:59:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:58.433 13:59:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:58.433 13:59:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:58.433 13:59:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:58.433 13:59:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:58.433 13:59:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.433 13:59:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:58.433 13:59:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.969 13:59:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:00.969 00:18:00.969 real 0m10.213s 00:18:00.969 user 0m13.328s 00:18:00.969 sys 0m4.629s 00:18:00.969 13:59:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:00.969 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:18:00.970 ************************************ 00:18:00.970 END TEST nvmf_bdevio 00:18:00.970 ************************************ 00:18:00.970 13:59:51 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:18:00.970 13:59:51 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:00.970 13:59:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:18:00.970 13:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:00.970 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:18:00.970 ************************************ 00:18:00.970 START TEST nvmf_bdevio_no_huge 00:18:00.970 ************************************ 00:18:00.970 13:59:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:00.970 * Looking for test storage... 
00:18:00.970 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.970 13:59:51 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.970 13:59:51 -- nvmf/common.sh@7 -- # uname -s 00:18:00.970 13:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.970 13:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.970 13:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.970 13:59:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.970 13:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:00.970 13:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:00.970 13:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.970 13:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:00.970 13:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.970 13:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:00.970 13:59:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.970 13:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.970 13:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.970 13:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:00.970 13:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:00.970 13:59:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.970 13:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.970 13:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.970 13:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.970 13:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.970 13:59:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.970 13:59:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.970 13:59:51 -- paths/export.sh@5 -- # export PATH 00:18:00.970 13:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.970 13:59:51 -- nvmf/common.sh@46 -- # : 0 00:18:00.970 13:59:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:00.970 13:59:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:00.970 13:59:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:00.970 13:59:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.970 13:59:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.970 13:59:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:00.970 13:59:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:00.970 13:59:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:00.970 13:59:51 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:00.970 13:59:51 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:00.970 13:59:51 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:00.970 13:59:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:00.970 13:59:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.970 13:59:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:00.970 13:59:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:00.970 13:59:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:00.970 13:59:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:00.970 13:59:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:00.970 13:59:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:00.970 13:59:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:00.970 13:59:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:00.970 13:59:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:00.970 13:59:51 -- common/autotest_common.sh@10 -- # set +x 00:18:06.245 13:59:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:06.245 13:59:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:06.245 13:59:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:06.245 13:59:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:06.245 13:59:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:06.245 13:59:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:06.245 13:59:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:06.245 13:59:56 -- nvmf/common.sh@294 -- # net_devs=() 00:18:06.245 13:59:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:06.245 13:59:56 -- nvmf/common.sh@295 
-- # e810=() 00:18:06.245 13:59:56 -- nvmf/common.sh@295 -- # local -ga e810 00:18:06.245 13:59:56 -- nvmf/common.sh@296 -- # x722=() 00:18:06.245 13:59:56 -- nvmf/common.sh@296 -- # local -ga x722 00:18:06.245 13:59:56 -- nvmf/common.sh@297 -- # mlx=() 00:18:06.245 13:59:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:06.245 13:59:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:06.245 13:59:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:06.245 13:59:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:06.245 13:59:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:06.245 13:59:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:06.245 13:59:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:06.245 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:06.245 13:59:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:06.245 13:59:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:06.245 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:06.245 13:59:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:06.245 13:59:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:06.245 13:59:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:06.245 13:59:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.245 13:59:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:06.245 13:59:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.245 13:59:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:06.245 Found 
net devices under 0000:86:00.0: cvl_0_0 00:18:06.246 13:59:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.246 13:59:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:06.246 13:59:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:06.246 13:59:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:06.246 13:59:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:06.246 13:59:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:06.246 Found net devices under 0000:86:00.1: cvl_0_1 00:18:06.246 13:59:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:06.246 13:59:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:06.246 13:59:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:06.246 13:59:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:06.246 13:59:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:06.246 13:59:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:06.246 13:59:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:06.246 13:59:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:06.246 13:59:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:06.246 13:59:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:06.246 13:59:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:06.246 13:59:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:06.246 13:59:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:06.246 13:59:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:06.246 13:59:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:06.246 13:59:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:06.246 13:59:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:06.246 13:59:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:06.246 13:59:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:06.246 13:59:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:06.246 13:59:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:06.246 13:59:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:06.246 13:59:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:06.246 13:59:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:06.246 13:59:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:06.246 13:59:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:06.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:06.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:18:06.246 00:18:06.246 --- 10.0.0.2 ping statistics --- 00:18:06.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.246 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:18:06.246 13:59:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:06.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:06.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:18:06.246 00:18:06.246 --- 10.0.0.1 ping statistics --- 00:18:06.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:06.246 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:18:06.246 13:59:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:06.246 13:59:56 -- nvmf/common.sh@410 -- # return 0 00:18:06.246 13:59:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:06.246 13:59:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:06.246 13:59:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:06.246 13:59:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:06.246 13:59:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:06.246 13:59:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:06.246 13:59:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:06.246 13:59:56 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:06.246 13:59:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:06.246 13:59:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:06.246 13:59:56 -- common/autotest_common.sh@10 -- # set +x 00:18:06.246 13:59:56 -- nvmf/common.sh@469 -- # nvmfpid=3274813 00:18:06.246 13:59:56 -- nvmf/common.sh@470 -- # waitforlisten 3274813 00:18:06.246 13:59:56 -- common/autotest_common.sh@819 -- # '[' -z 3274813 ']' 00:18:06.246 13:59:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.246 13:59:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:06.246 13:59:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:06.246 13:59:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:06.246 13:59:56 -- common/autotest_common.sh@10 -- # set +x 00:18:06.246 13:59:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:06.246 [2024-07-23 13:59:57.002433] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:06.246 [2024-07-23 13:59:57.002478] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:06.246 [2024-07-23 13:59:57.065441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:06.246 [2024-07-23 13:59:57.147388] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:06.246 [2024-07-23 13:59:57.147491] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:06.246 [2024-07-23 13:59:57.147499] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:06.246 [2024-07-23 13:59:57.147506] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
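Note: this second run brings up the same target without hugepages. nvmfappstart adds --no-huge -s 1024, and the EAL parameter line above shows DPDK switching to --no-huge --iova-mode=va accordingly. A sketch of the launch under those flags, reusing the namespace and paths from this log:

    # Hugepage-free startup: -s 1024 caps the DPDK memory pool at 1024 MB,
    # and --no-huge makes EAL back it with ordinary anonymous pages (VA
    # IOVA mode, as printed in the EAL parameters above).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &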
00:18:06.246 [2024-07-23 13:59:57.147615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:06.246 [2024-07-23 13:59:57.147724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:06.246 [2024-07-23 13:59:57.147830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:06.246 [2024-07-23 13:59:57.147832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:06.812 13:59:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:06.812 13:59:57 -- common/autotest_common.sh@852 -- # return 0 00:18:06.812 13:59:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:06.812 13:59:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:06.812 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.070 13:59:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:07.070 13:59:57 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:07.070 13:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:07.070 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.070 [2024-07-23 13:59:57.835621] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:07.070 13:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:07.070 13:59:57 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:07.070 13:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:07.070 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.070 Malloc0 00:18:07.070 13:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:07.070 13:59:57 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:07.070 13:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:07.070 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.070 13:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:07.070 13:59:57 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:07.070 13:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:07.070 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.070 13:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:07.070 13:59:57 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:07.070 13:59:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:07.070 13:59:57 -- common/autotest_common.sh@10 -- # set +x 00:18:07.070 [2024-07-23 13:59:57.879895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:07.070 13:59:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:07.070 13:59:57 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:07.070 13:59:57 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:07.070 13:59:57 -- nvmf/common.sh@520 -- # config=() 00:18:07.070 13:59:57 -- nvmf/common.sh@520 -- # local subsystem config 00:18:07.071 13:59:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:07.071 13:59:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:07.071 { 00:18:07.071 "params": { 00:18:07.071 "name": "Nvme$subsystem", 00:18:07.071 "trtype": "$TEST_TRANSPORT", 00:18:07.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:07.071 "adrfam": "ipv4", 00:18:07.071 
"trsvcid": "$NVMF_PORT", 00:18:07.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:07.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:07.071 "hdgst": ${hdgst:-false}, 00:18:07.071 "ddgst": ${ddgst:-false} 00:18:07.071 }, 00:18:07.071 "method": "bdev_nvme_attach_controller" 00:18:07.071 } 00:18:07.071 EOF 00:18:07.071 )") 00:18:07.071 13:59:57 -- nvmf/common.sh@542 -- # cat 00:18:07.071 13:59:57 -- nvmf/common.sh@544 -- # jq . 00:18:07.071 13:59:57 -- nvmf/common.sh@545 -- # IFS=, 00:18:07.071 13:59:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:07.071 "params": { 00:18:07.071 "name": "Nvme1", 00:18:07.071 "trtype": "tcp", 00:18:07.071 "traddr": "10.0.0.2", 00:18:07.071 "adrfam": "ipv4", 00:18:07.071 "trsvcid": "4420", 00:18:07.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:07.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:07.071 "hdgst": false, 00:18:07.071 "ddgst": false 00:18:07.071 }, 00:18:07.071 "method": "bdev_nvme_attach_controller" 00:18:07.071 }' 00:18:07.071 [2024-07-23 13:59:57.927601] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:07.071 [2024-07-23 13:59:57.927644] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3275063 ] 00:18:07.071 [2024-07-23 13:59:57.984187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:07.071 [2024-07-23 13:59:58.068109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:07.071 [2024-07-23 13:59:58.068205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.071 [2024-07-23 13:59:58.068205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.329 [2024-07-23 13:59:58.328290] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:18:07.329 [2024-07-23 13:59:58.328321] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:18:07.329 I/O targets: 00:18:07.329 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:07.329 00:18:07.329 00:18:07.329 CUnit - A unit testing framework for C - Version 2.1-3 00:18:07.329 http://cunit.sourceforge.net/ 00:18:07.329 00:18:07.329 00:18:07.329 Suite: bdevio tests on: Nvme1n1 00:18:07.588 Test: blockdev write read block ...passed 00:18:07.588 Test: blockdev write zeroes read block ...passed 00:18:07.588 Test: blockdev write zeroes read no split ...passed 00:18:07.588 Test: blockdev write zeroes read split ...passed 00:18:07.588 Test: blockdev write zeroes read split partial ...passed 00:18:07.588 Test: blockdev reset ...[2024-07-23 13:59:58.525343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:07.588 [2024-07-23 13:59:58.525397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c7ea0 (9): Bad file descriptor 00:18:07.588 [2024-07-23 13:59:58.578687] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:07.588 passed 00:18:07.847 Test: blockdev write read 8 blocks ...passed 00:18:07.847 Test: blockdev write read size > 128k ...passed 00:18:07.847 Test: blockdev write read invalid size ...passed 00:18:07.847 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:07.847 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:07.847 Test: blockdev write read max offset ...passed 00:18:07.847 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:07.847 Test: blockdev writev readv 8 blocks ...passed 00:18:07.847 Test: blockdev writev readv 30 x 1block ...passed 00:18:07.847 Test: blockdev writev readv block ...passed 00:18:08.106 Test: blockdev writev readv size > 128k ...passed 00:18:08.106 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:08.106 Test: blockdev comparev and writev ...[2024-07-23 13:59:58.889154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.106 [2024-07-23 13:59:58.889182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.889195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.106 [2024-07-23 13:59:58.889203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.889656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.106 [2024-07-23 13:59:58.889667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.889678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.106 [2024-07-23 13:59:58.889686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.890120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.106 [2024-07-23 13:59:58.890131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.890143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.106 [2024-07-23 13:59:58.890150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.890585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.106 [2024-07-23 13:59:58.890595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.890606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:08.106 [2024-07-23 13:59:58.890614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:08.106 passed 00:18:08.106 Test: blockdev nvme passthru rw ...passed 00:18:08.106 Test: blockdev nvme passthru vendor specific ...[2024-07-23 13:59:58.974804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.106 [2024-07-23 13:59:58.974820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.975138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.106 [2024-07-23 13:59:58.975147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.975463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.106 [2024-07-23 13:59:58.975471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:08.106 [2024-07-23 13:59:58.975779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:08.106 [2024-07-23 13:59:58.975788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:08.106 passed 00:18:08.106 Test: blockdev nvme admin passthru ...passed 00:18:08.106 Test: blockdev copy ...passed 00:18:08.106 00:18:08.106 Run Summary: Type Total Ran Passed Failed Inactive 00:18:08.106 suites 1 1 n/a 0 0 00:18:08.106 tests 23 23 23 0 0 00:18:08.106 asserts 152 152 152 0 n/a 00:18:08.106 00:18:08.106 Elapsed time = 1.397 seconds 00:18:08.365 13:59:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:08.365 13:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:08.365 13:59:59 -- common/autotest_common.sh@10 -- # set +x 00:18:08.365 13:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:08.365 13:59:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:08.365 13:59:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:08.365 13:59:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:08.365 13:59:59 -- nvmf/common.sh@116 -- # sync 00:18:08.365 13:59:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:08.365 13:59:59 -- nvmf/common.sh@119 -- # set +e 00:18:08.365 13:59:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:08.365 13:59:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:08.365 rmmod nvme_tcp 00:18:08.365 rmmod nvme_fabrics 00:18:08.365 rmmod nvme_keyring 00:18:08.624 13:59:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:08.624 13:59:59 -- nvmf/common.sh@123 -- # set -e 00:18:08.624 13:59:59 -- nvmf/common.sh@124 -- # return 0 00:18:08.624 13:59:59 -- nvmf/common.sh@477 -- # '[' -n 3274813 ']' 00:18:08.624 13:59:59 -- nvmf/common.sh@478 -- # killprocess 3274813 00:18:08.624 13:59:59 -- common/autotest_common.sh@926 -- # '[' -z 3274813 ']' 00:18:08.624 13:59:59 -- common/autotest_common.sh@930 -- # kill -0 3274813 00:18:08.624 13:59:59 -- common/autotest_common.sh@931 -- # uname 00:18:08.624 13:59:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:08.624 13:59:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3274813 00:18:08.624 13:59:59 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:18:08.624 13:59:59 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:18:08.624 13:59:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3274813' 00:18:08.624 killing process with pid 3274813 00:18:08.624 13:59:59 -- common/autotest_common.sh@945 -- # kill 3274813 00:18:08.624 13:59:59 -- common/autotest_common.sh@950 -- # wait 3274813 00:18:08.884 13:59:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:08.884 13:59:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:08.884 13:59:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:08.884 13:59:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:08.884 13:59:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:08.884 13:59:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.884 13:59:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.884 13:59:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.829 14:00:01 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:10.829 00:18:10.829 real 0m10.291s 00:18:10.829 user 0m14.346s 00:18:10.829 sys 0m4.733s 00:18:10.829 14:00:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:10.829 14:00:01 -- common/autotest_common.sh@10 -- # set +x 00:18:10.829 ************************************ 00:18:10.829 END TEST nvmf_bdevio_no_huge 00:18:10.829 ************************************ 00:18:11.087 14:00:01 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:11.087 14:00:01 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:11.087 14:00:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:11.087 14:00:01 -- common/autotest_common.sh@10 -- # set +x 00:18:11.087 ************************************ 00:18:11.087 START TEST nvmf_tls 00:18:11.087 ************************************ 00:18:11.087 14:00:01 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:11.087 * Looking for test storage... 
00:18:11.087 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:11.087 14:00:01 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:11.087 14:00:01 -- nvmf/common.sh@7 -- # uname -s 00:18:11.087 14:00:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:11.087 14:00:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:11.087 14:00:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:11.087 14:00:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:11.087 14:00:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:11.087 14:00:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:11.087 14:00:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:11.087 14:00:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:11.087 14:00:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:11.087 14:00:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:11.087 14:00:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.087 14:00:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.087 14:00:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:11.087 14:00:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:11.087 14:00:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:11.087 14:00:01 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:11.087 14:00:01 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:11.087 14:00:01 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:11.087 14:00:01 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:11.087 14:00:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.087 14:00:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.087 14:00:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.087 14:00:01 -- paths/export.sh@5 -- # export PATH 00:18:11.087 14:00:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:11.087 14:00:01 -- nvmf/common.sh@46 -- # : 0 00:18:11.087 14:00:01 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:11.087 14:00:01 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:11.087 14:00:01 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:11.087 14:00:01 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:11.087 14:00:01 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:11.087 14:00:01 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:11.087 14:00:01 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:11.087 14:00:01 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:11.087 14:00:01 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:11.087 14:00:01 -- target/tls.sh@71 -- # nvmftestinit 00:18:11.087 14:00:01 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:11.087 14:00:01 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:11.087 14:00:01 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:11.087 14:00:01 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:11.087 14:00:01 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:11.087 14:00:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:11.087 14:00:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:11.087 14:00:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:11.087 14:00:01 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:11.087 14:00:01 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:11.087 14:00:01 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:11.087 14:00:01 -- common/autotest_common.sh@10 -- # set +x 00:18:16.363 14:00:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:16.363 14:00:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:16.363 14:00:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:16.363 14:00:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:16.363 14:00:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:16.363 14:00:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:16.363 14:00:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:16.363 14:00:07 -- nvmf/common.sh@294 -- # net_devs=() 00:18:16.363 14:00:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:16.363 14:00:07 -- nvmf/common.sh@295 -- # e810=() 00:18:16.363 
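Note: gather_supported_nvmf_pci_devs, which starts here, classifies NICs by PCI vendor:device pairs (intel=0x8086 for E810/X722, mellanox=0x15b3 for mlx5) and resolves each match to its kernel netdev. A rough standalone equivalent using lspci, assuming the E810 ID table the trace prints below; the real script walks a prebuilt pci_bus_cache instead of invoking lspci:

    intel=8086
    for dev in 1592 159b; do                          # E810 IDs from the trace
        for pci in $(lspci -Dn -d "${intel}:${dev}" | awk '{print $1}'); do
            # e.g. 0000:86:00.0 -> cvl_0_0, as echoed in the log below
            ls "/sys/bus/pci/devices/${pci}/net/"
        done
    done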
14:00:07 -- nvmf/common.sh@295 -- # local -ga e810 00:18:16.363 14:00:07 -- nvmf/common.sh@296 -- # x722=() 00:18:16.363 14:00:07 -- nvmf/common.sh@296 -- # local -ga x722 00:18:16.363 14:00:07 -- nvmf/common.sh@297 -- # mlx=() 00:18:16.363 14:00:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:16.363 14:00:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.363 14:00:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:16.363 14:00:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:16.363 14:00:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:16.363 14:00:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:16.363 14:00:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:16.363 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:16.363 14:00:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:16.363 14:00:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:16.363 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:16.363 14:00:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:16.363 14:00:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:16.363 14:00:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.363 14:00:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:16.363 14:00:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.363 14:00:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:16.363 Found net devices under 
0000:86:00.0: cvl_0_0 00:18:16.363 14:00:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.363 14:00:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:16.363 14:00:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.363 14:00:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:16.363 14:00:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.363 14:00:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:16.363 Found net devices under 0000:86:00.1: cvl_0_1 00:18:16.363 14:00:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.363 14:00:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:16.363 14:00:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:16.363 14:00:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:16.363 14:00:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:16.363 14:00:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.363 14:00:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.363 14:00:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.363 14:00:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:16.363 14:00:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.363 14:00:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.363 14:00:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:16.363 14:00:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.363 14:00:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.363 14:00:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:16.363 14:00:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:16.363 14:00:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.363 14:00:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.363 14:00:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.363 14:00:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.622 14:00:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:16.622 14:00:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:16.622 14:00:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:16.622 14:00:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.622 14:00:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:16.622 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.622 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:18:16.622 00:18:16.622 --- 10.0.0.2 ping statistics --- 00:18:16.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.622 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:18:16.622 14:00:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.622 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.622 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:18:16.622 00:18:16.622 --- 10.0.0.1 ping statistics --- 00:18:16.622 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.622 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:16.622 14:00:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.622 14:00:07 -- nvmf/common.sh@410 -- # return 0 00:18:16.622 14:00:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:16.622 14:00:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.622 14:00:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:16.622 14:00:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:16.622 14:00:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.622 14:00:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:16.622 14:00:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:16.622 14:00:07 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:16.622 14:00:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:16.622 14:00:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:16.622 14:00:07 -- common/autotest_common.sh@10 -- # set +x 00:18:16.622 14:00:07 -- nvmf/common.sh@469 -- # nvmfpid=3278961 00:18:16.622 14:00:07 -- nvmf/common.sh@470 -- # waitforlisten 3278961 00:18:16.622 14:00:07 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:16.622 14:00:07 -- common/autotest_common.sh@819 -- # '[' -z 3278961 ']' 00:18:16.622 14:00:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.622 14:00:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:16.622 14:00:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.622 14:00:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:16.622 14:00:07 -- common/autotest_common.sh@10 -- # set +x 00:18:16.622 [2024-07-23 14:00:07.567902] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:16.622 [2024-07-23 14:00:07.567944] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.622 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.622 [2024-07-23 14:00:07.625418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.881 [2024-07-23 14:00:07.706664] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:16.881 [2024-07-23 14:00:07.706781] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.881 [2024-07-23 14:00:07.706789] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.881 [2024-07-23 14:00:07.706795] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
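The namespace plumbing traced above (nvmf_common's nvmf_tcp_init) reduces to the following sketch; cvl_0_0/cvl_0_1 are the two E810 ports discovered earlier, and the addresses and port are exactly those in the log:

  # target port goes into its own namespace, initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # reachability is ping-verified in both directions above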
00:18:16.881 [2024-07-23 14:00:07.706810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.446 14:00:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:17.446 14:00:08 -- common/autotest_common.sh@852 -- # return 0 00:18:17.446 14:00:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:17.446 14:00:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:17.446 14:00:08 -- common/autotest_common.sh@10 -- # set +x 00:18:17.446 14:00:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.446 14:00:08 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:18:17.446 14:00:08 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:17.704 true 00:18:17.704 14:00:08 -- target/tls.sh@82 -- # jq -r .tls_version 00:18:17.704 14:00:08 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:17.962 14:00:08 -- target/tls.sh@82 -- # version=0 00:18:17.962 14:00:08 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:18:17.962 14:00:08 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:17.962 14:00:08 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:17.962 14:00:08 -- target/tls.sh@90 -- # jq -r .tls_version 00:18:18.221 14:00:09 -- target/tls.sh@90 -- # version=13 00:18:18.221 14:00:09 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:18:18.221 14:00:09 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:18.221 14:00:09 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.221 14:00:09 -- target/tls.sh@98 -- # jq -r .tls_version 00:18:18.479 14:00:09 -- target/tls.sh@98 -- # version=7 00:18:18.479 14:00:09 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:18:18.479 14:00:09 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.479 14:00:09 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:18.738 14:00:09 -- target/tls.sh@105 -- # ktls=false 00:18:18.738 14:00:09 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:18:18.738 14:00:09 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:18.738 14:00:09 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:18.738 14:00:09 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:18.997 14:00:09 -- target/tls.sh@113 -- # ktls=true 00:18:18.997 14:00:09 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:18:18.997 14:00:09 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:19.255 14:00:10 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:19.255 14:00:10 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:18:19.255 14:00:10 -- target/tls.sh@121 -- # ktls=false 00:18:19.255 14:00:10 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:18:19.255 14:00:10 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
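Before the key construction invoked just above is traced, note that every TLS knob in this block is exercised with the same set-then-verify RPC round trip; condensed, with rpc standing in for the absolute rpc.py path spelled out in the log:

  rpc=scripts/rpc.py                                         # shortened from the full workspace path
  $rpc sock_set_default_impl -i ssl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  [[ $($rpc sock_impl_get_options -i ssl | jq -r .tls_version) == 13 ]]
  $rpc sock_impl_set_options -i ssl --tls-version 7          # re-checked the same way
  $rpc sock_impl_set_options -i ssl --enable-ktls
  [[ $($rpc sock_impl_get_options -i ssl | jq -r .enable_ktls) == true ]]
  $rpc sock_impl_set_options -i ssl --disable-ktls           # the runs below proceed with kTLS off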
00:18:19.255 14:00:10 -- target/tls.sh@49 -- # local key hash crc 00:18:19.255 14:00:10 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:18:19.255 14:00:10 -- target/tls.sh@51 -- # hash=01 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # gzip -1 -c 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # tail -c8 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # head -c 4 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # crc='p$H�' 00:18:19.255 14:00:10 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:19.255 14:00:10 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:18:19.255 14:00:10 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.255 14:00:10 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.255 14:00:10 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:18:19.255 14:00:10 -- target/tls.sh@49 -- # local key hash crc 00:18:19.255 14:00:10 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:18:19.255 14:00:10 -- target/tls.sh@51 -- # hash=01 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # gzip -1 -c 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # tail -c8 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # head -c 4 00:18:19.255 14:00:10 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:18:19.255 14:00:10 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:19.255 14:00:10 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:18:19.255 14:00:10 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.255 14:00:10 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.255 14:00:10 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:19.255 14:00:10 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:19.255 14:00:10 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:19.255 14:00:10 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:19.255 14:00:10 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:19.256 14:00:10 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:19.256 14:00:10 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:19.517 14:00:10 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:19.777 14:00:10 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:19.777 14:00:10 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:19.777 14:00:10 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:20.036 [2024-07-23 14:00:10.803854] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
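format_interchange_psk, reconstructed from the xtrace above: a CRC32 of the configured key is taken from gzip's 8-byte trailer (its first 4 bytes are the CRC), appended to the key, and the result is base64-wrapped into the NVMeTLSkey-1:<hash>:...: container. A minimal restatement (the CRC bytes are binary; the script stores them in a shell variable the same way):

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # gzip trailer = CRC32 + ISIZE
  echo -n "NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
  # -> NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

Both generated keys are then written to key1.txt and key2.txt under test/nvmf/target and chmod'ed 0600 before the target is configured.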
00:18:20.036 14:00:10 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:20.036 14:00:10 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:20.296 [2024-07-23 14:00:11.132711] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:20.296 [2024-07-23 14:00:11.132875] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:20.296 14:00:11 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:20.554 malloc0 00:18:20.554 14:00:11 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:20.554 14:00:11 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:20.812 14:00:11 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:20.812 EAL: No free 2048 kB hugepages reported on node 1 00:18:30.792 Initializing NVMe Controllers 00:18:30.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:30.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:30.792 Initialization complete. Launching workers. 
00:18:30.792 ======================================================== 00:18:30.792 Latency(us) 00:18:30.792 Device Information : IOPS MiB/s Average min max 00:18:30.792 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17230.78 67.31 3714.64 881.42 5476.78 00:18:30.792 ======================================================== 00:18:30.792 Total : 17230.78 67.31 3714.64 881.42 5476.78 00:18:30.792 00:18:30.792 14:00:21 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:30.792 14:00:21 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:30.792 14:00:21 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:30.792 14:00:21 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:30.792 14:00:21 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:30.792 14:00:21 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:30.792 14:00:21 -- target/tls.sh@28 -- # bdevperf_pid=3281748 00:18:30.792 14:00:21 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:30.792 14:00:21 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:30.792 14:00:21 -- target/tls.sh@31 -- # waitforlisten 3281748 /var/tmp/bdevperf.sock 00:18:30.792 14:00:21 -- common/autotest_common.sh@819 -- # '[' -z 3281748 ']' 00:18:30.792 14:00:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:30.792 14:00:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:30.792 14:00:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:30.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:30.792 14:00:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:30.792 14:00:21 -- common/autotest_common.sh@10 -- # set +x 00:18:30.792 [2024-07-23 14:00:21.803457] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:30.792 [2024-07-23 14:00:21.803511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281748 ] 00:18:31.053 EAL: No free 2048 kB hugepages reported on node 1 00:18:31.053 [2024-07-23 14:00:21.853904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.053 [2024-07-23 14:00:21.927998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.619 14:00:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:31.619 14:00:22 -- common/autotest_common.sh@852 -- # return 0 00:18:31.619 14:00:22 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:31.877 [2024-07-23 14:00:22.755188] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:31.877 TLSTESTn1 00:18:31.877 14:00:22 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:32.135 Running I/O for 10 seconds... 00:18:42.135 00:18:42.135 Latency(us) 00:18:42.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.135 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:42.135 Verification LBA range: start 0x0 length 0x2000 00:18:42.135 TLSTESTn1 : 10.04 1665.43 6.51 0.00 0.00 76734.90 11568.53 108504.82 00:18:42.135 =================================================================================================================== 00:18:42.135 Total : 1665.43 6.51 0.00 0.00 76734.90 11568.53 108504.82 00:18:42.135 0 00:18:42.135 14:00:33 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:42.135 14:00:33 -- target/tls.sh@45 -- # killprocess 3281748 00:18:42.135 14:00:33 -- common/autotest_common.sh@926 -- # '[' -z 3281748 ']' 00:18:42.135 14:00:33 -- common/autotest_common.sh@930 -- # kill -0 3281748 00:18:42.135 14:00:33 -- common/autotest_common.sh@931 -- # uname 00:18:42.135 14:00:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:42.135 14:00:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3281748 00:18:42.135 14:00:33 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:42.135 14:00:33 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:42.135 14:00:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3281748' 00:18:42.135 killing process with pid 3281748 00:18:42.135 14:00:33 -- common/autotest_common.sh@945 -- # kill 3281748 00:18:42.135 Received shutdown signal, test time was about 10.000000 seconds 00:18:42.135 00:18:42.135 Latency(us) 00:18:42.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:42.135 =================================================================================================================== 00:18:42.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:42.135 14:00:33 -- common/autotest_common.sh@950 -- # wait 3281748 00:18:42.394 14:00:33 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:42.394 14:00:33 -- common/autotest_common.sh@640 -- # local es=0 00:18:42.394 14:00:33 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:42.394 14:00:33 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:42.394 14:00:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.394 14:00:33 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:42.394 14:00:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:42.394 14:00:33 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:42.394 14:00:33 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:42.394 14:00:33 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:42.394 14:00:33 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:42.394 14:00:33 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:18:42.394 14:00:33 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:42.394 14:00:33 -- target/tls.sh@28 -- # bdevperf_pid=3283621 00:18:42.394 14:00:33 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:42.394 14:00:33 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:42.394 14:00:33 -- target/tls.sh@31 -- # waitforlisten 3283621 /var/tmp/bdevperf.sock 00:18:42.394 14:00:33 -- common/autotest_common.sh@819 -- # '[' -z 3283621 ']' 00:18:42.394 14:00:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:42.394 14:00:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:42.394 14:00:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:42.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:42.394 14:00:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:42.394 14:00:33 -- common/autotest_common.sh@10 -- # set +x 00:18:42.394 [2024-07-23 14:00:33.303400] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:42.395 [2024-07-23 14:00:33.303449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283621 ] 00:18:42.395 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.395 [2024-07-23 14:00:33.354766] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:42.655 [2024-07-23 14:00:33.426218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:43.220 14:00:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.220 14:00:34 -- common/autotest_common.sh@852 -- # return 0 00:18:43.220 14:00:34 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:18:43.478 [2024-07-23 14:00:34.243639] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:43.478 [2024-07-23 14:00:34.252524] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:43.478 [2024-07-23 14:00:34.253216] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfde0c0 (107): Transport endpoint is not connected 00:18:43.478 [2024-07-23 14:00:34.254209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfde0c0 (9): Bad file descriptor 00:18:43.478 [2024-07-23 14:00:34.255211] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:43.478 [2024-07-23 14:00:34.255220] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:43.478 [2024-07-23 14:00:34.255230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
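This failure is the expected outcome: the initiator presented key2.txt while the target registered key1.txt for host1, so the attach must not succeed; the JSON-RPC request and error response that follow record exactly that. The check uses the NOT helper from autotest_common.sh, which inverts the exit status so a failed attach passes the test:

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt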
00:18:43.478 request: 00:18:43.478 { 00:18:43.478 "name": "TLSTEST", 00:18:43.478 "trtype": "tcp", 00:18:43.478 "traddr": "10.0.0.2", 00:18:43.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:43.478 "adrfam": "ipv4", 00:18:43.478 "trsvcid": "4420", 00:18:43.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:43.478 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:18:43.478 "method": "bdev_nvme_attach_controller", 00:18:43.478 "req_id": 1 00:18:43.478 } 00:18:43.478 Got JSON-RPC error response 00:18:43.478 response: 00:18:43.478 { 00:18:43.478 "code": -32602, 00:18:43.478 "message": "Invalid parameters" 00:18:43.478 } 00:18:43.478 14:00:34 -- target/tls.sh@36 -- # killprocess 3283621 00:18:43.478 14:00:34 -- common/autotest_common.sh@926 -- # '[' -z 3283621 ']' 00:18:43.478 14:00:34 -- common/autotest_common.sh@930 -- # kill -0 3283621 00:18:43.478 14:00:34 -- common/autotest_common.sh@931 -- # uname 00:18:43.478 14:00:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:43.478 14:00:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3283621 00:18:43.478 14:00:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:43.478 14:00:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:43.478 14:00:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3283621' 00:18:43.478 killing process with pid 3283621 00:18:43.478 14:00:34 -- common/autotest_common.sh@945 -- # kill 3283621 00:18:43.478 Received shutdown signal, test time was about 10.000000 seconds 00:18:43.478 00:18:43.478 Latency(us) 00:18:43.478 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.478 =================================================================================================================== 00:18:43.478 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:43.478 14:00:34 -- common/autotest_common.sh@950 -- # wait 3283621 00:18:43.736 14:00:34 -- target/tls.sh@37 -- # return 1 00:18:43.736 14:00:34 -- common/autotest_common.sh@643 -- # es=1 00:18:43.736 14:00:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:43.736 14:00:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:43.736 14:00:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:43.736 14:00:34 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:43.737 14:00:34 -- common/autotest_common.sh@640 -- # local es=0 00:18:43.737 14:00:34 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:43.737 14:00:34 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:43.737 14:00:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:43.737 14:00:34 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:43.737 14:00:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:43.737 14:00:34 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:43.737 14:00:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:43.737 14:00:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:43.737 14:00:34 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:18:43.737 14:00:34 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:43.737 14:00:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:43.737 14:00:34 -- target/tls.sh@28 -- # bdevperf_pid=3283870 00:18:43.737 14:00:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:43.737 14:00:34 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:43.737 14:00:34 -- target/tls.sh@31 -- # waitforlisten 3283870 /var/tmp/bdevperf.sock 00:18:43.737 14:00:34 -- common/autotest_common.sh@819 -- # '[' -z 3283870 ']' 00:18:43.737 14:00:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:43.737 14:00:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:43.737 14:00:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:43.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:43.737 14:00:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:43.737 14:00:34 -- common/autotest_common.sh@10 -- # set +x 00:18:43.737 [2024-07-23 14:00:34.561321] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:43.737 [2024-07-23 14:00:34.561369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283870 ] 00:18:43.737 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.737 [2024-07-23 14:00:34.611059] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.737 [2024-07-23 14:00:34.686853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.672 14:00:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:44.672 14:00:35 -- common/autotest_common.sh@852 -- # return 0 00:18:44.672 14:00:35 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:44.672 [2024-07-23 14:00:35.512543] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.672 [2024-07-23 14:00:35.517424] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:44.672 [2024-07-23 14:00:35.517445] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:44.672 [2024-07-23 14:00:35.517469] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:44.672 [2024-07-23 14:00:35.518113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6130c0 (107): Transport endpoint is not connected 00:18:44.672 [2024-07-23 14:00:35.519105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x6130c0 (9): Bad file descriptor 00:18:44.672 [2024-07-23 14:00:35.520106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:44.672 [2024-07-23 14:00:35.520116] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:44.672 [2024-07-23 14:00:35.520123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:44.672 request: 00:18:44.672 { 00:18:44.672 "name": "TLSTEST", 00:18:44.672 "trtype": "tcp", 00:18:44.672 "traddr": "10.0.0.2", 00:18:44.672 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:44.672 "adrfam": "ipv4", 00:18:44.672 "trsvcid": "4420", 00:18:44.672 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.672 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:18:44.672 "method": "bdev_nvme_attach_controller", 00:18:44.672 "req_id": 1 00:18:44.672 } 00:18:44.672 Got JSON-RPC error response 00:18:44.672 response: 00:18:44.672 { 00:18:44.672 "code": -32602, 00:18:44.672 "message": "Invalid parameters" 00:18:44.672 } 00:18:44.672 14:00:35 -- target/tls.sh@36 -- # killprocess 3283870 00:18:44.672 14:00:35 -- common/autotest_common.sh@926 -- # '[' -z 3283870 ']' 00:18:44.672 14:00:35 -- common/autotest_common.sh@930 -- # kill -0 3283870 00:18:44.672 14:00:35 -- common/autotest_common.sh@931 -- # uname 00:18:44.672 14:00:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:44.672 14:00:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3283870 00:18:44.672 14:00:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:44.672 14:00:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:44.672 14:00:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3283870' 00:18:44.672 killing process with pid 3283870 00:18:44.672 14:00:35 -- common/autotest_common.sh@945 -- # kill 3283870 00:18:44.672 Received shutdown signal, test time was about 10.000000 seconds 00:18:44.672 00:18:44.673 Latency(us) 00:18:44.673 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.673 =================================================================================================================== 00:18:44.673 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:44.673 14:00:35 -- common/autotest_common.sh@950 -- # wait 3283870 00:18:44.932 14:00:35 -- target/tls.sh@37 -- # return 1 00:18:44.932 14:00:35 -- common/autotest_common.sh@643 -- # es=1 00:18:44.932 14:00:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:44.932 14:00:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:44.932 14:00:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:44.932 14:00:35 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:44.932 14:00:35 -- common/autotest_common.sh@640 -- # local es=0 00:18:44.932 14:00:35 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:44.932 14:00:35 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:44.932 14:00:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:44.932 14:00:35 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:44.932 14:00:35 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:44.932 14:00:35 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:44.932 14:00:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:44.932 14:00:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:44.932 14:00:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:44.932 14:00:35 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:18:44.932 14:00:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:44.932 14:00:35 -- target/tls.sh@28 -- # bdevperf_pid=3284108 00:18:44.932 14:00:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.932 14:00:35 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:44.932 14:00:35 -- target/tls.sh@31 -- # waitforlisten 3284108 /var/tmp/bdevperf.sock 00:18:44.932 14:00:35 -- common/autotest_common.sh@819 -- # '[' -z 3284108 ']' 00:18:44.932 14:00:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.932 14:00:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:44.932 14:00:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.932 14:00:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:44.932 14:00:35 -- common/autotest_common.sh@10 -- # set +x 00:18:44.932 [2024-07-23 14:00:35.828634] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:18:44.932 [2024-07-23 14:00:35.828679] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284108 ] 00:18:44.932 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.932 [2024-07-23 14:00:35.877846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.932 [2024-07-23 14:00:35.942771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.867 14:00:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:45.867 14:00:36 -- common/autotest_common.sh@852 -- # return 0 00:18:45.867 14:00:36 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:18:45.867 [2024-07-23 14:00:36.777199] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:45.867 [2024-07-23 14:00:36.788901] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.867 [2024-07-23 14:00:36.788921] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:45.867 [2024-07-23 14:00:36.788944] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:45.867 [2024-07-23 14:00:36.789718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d0c0 (107): Transport endpoint is not connected 00:18:45.867 [2024-07-23 14:00:36.790711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74d0c0 (9): Bad file descriptor 00:18:45.867 [2024-07-23 14:00:36.791712] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:45.867 [2024-07-23 14:00:36.791722] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:45.867 [2024-07-23 14:00:36.791729] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
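Same expected-failure shape, different cause: as the tcp_sock_get_key errors above show, the target looks PSKs up by the TLS PSK identity string "NVMe0R01 <hostnqn> <subnqn>". key1 was registered for (host1, cnode1), so pointing the same host at cnode2 here (and host2 at cnode1 in the previous case) finds no entry, and the error dump follows:

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt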
00:18:45.867 request: 00:18:45.867 { 00:18:45.867 "name": "TLSTEST", 00:18:45.867 "trtype": "tcp", 00:18:45.867 "traddr": "10.0.0.2", 00:18:45.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:45.867 "adrfam": "ipv4", 00:18:45.867 "trsvcid": "4420", 00:18:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:45.867 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:18:45.867 "method": "bdev_nvme_attach_controller", 00:18:45.867 "req_id": 1 00:18:45.867 } 00:18:45.867 Got JSON-RPC error response 00:18:45.867 response: 00:18:45.867 { 00:18:45.867 "code": -32602, 00:18:45.867 "message": "Invalid parameters" 00:18:45.867 } 00:18:45.867 14:00:36 -- target/tls.sh@36 -- # killprocess 3284108 00:18:45.867 14:00:36 -- common/autotest_common.sh@926 -- # '[' -z 3284108 ']' 00:18:45.867 14:00:36 -- common/autotest_common.sh@930 -- # kill -0 3284108 00:18:45.867 14:00:36 -- common/autotest_common.sh@931 -- # uname 00:18:45.867 14:00:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:45.867 14:00:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284108 00:18:45.867 14:00:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:45.867 14:00:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:45.867 14:00:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284108' 00:18:45.867 killing process with pid 3284108 00:18:45.867 14:00:36 -- common/autotest_common.sh@945 -- # kill 3284108 00:18:45.867 Received shutdown signal, test time was about 10.000000 seconds 00:18:45.867 00:18:45.867 Latency(us) 00:18:45.867 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.867 =================================================================================================================== 00:18:45.868 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:45.868 14:00:36 -- common/autotest_common.sh@950 -- # wait 3284108 00:18:46.127 14:00:37 -- target/tls.sh@37 -- # return 1 00:18:46.127 14:00:37 -- common/autotest_common.sh@643 -- # es=1 00:18:46.127 14:00:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:46.127 14:00:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:46.127 14:00:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:46.127 14:00:37 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.127 14:00:37 -- common/autotest_common.sh@640 -- # local es=0 00:18:46.127 14:00:37 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.127 14:00:37 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:18:46.127 14:00:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:46.127 14:00:37 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:18:46.127 14:00:37 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:18:46.127 14:00:37 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:46.127 14:00:37 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:46.127 14:00:37 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:46.127 14:00:37 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:46.127 14:00:37 -- target/tls.sh@23 -- # psk= 00:18:46.127 14:00:37 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:46.127 14:00:37 -- target/tls.sh@28 
-- # bdevperf_pid=3284347 00:18:46.127 14:00:37 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:46.127 14:00:37 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:46.127 14:00:37 -- target/tls.sh@31 -- # waitforlisten 3284347 /var/tmp/bdevperf.sock 00:18:46.127 14:00:37 -- common/autotest_common.sh@819 -- # '[' -z 3284347 ']' 00:18:46.127 14:00:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:46.127 14:00:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:46.127 14:00:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:46.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:46.127 14:00:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:46.127 14:00:37 -- common/autotest_common.sh@10 -- # set +x 00:18:46.127 [2024-07-23 14:00:37.100486] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:46.127 [2024-07-23 14:00:37.100534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284347 ] 00:18:46.127 EAL: No free 2048 kB hugepages reported on node 1 00:18:46.386 [2024-07-23 14:00:37.149826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.386 [2024-07-23 14:00:37.215553] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:46.952 14:00:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:46.952 14:00:37 -- common/autotest_common.sh@852 -- # return 0 00:18:46.952 14:00:37 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:47.210 [2024-07-23 14:00:38.062185] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:47.210 [2024-07-23 14:00:38.064419] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x177c740 (9): Bad file descriptor 00:18:47.210 [2024-07-23 14:00:38.065417] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:47.210 [2024-07-23 14:00:38.065427] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:47.210 [2024-07-23 14:00:38.065435] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
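The last negative case passes an empty PSK ('' in tls.sh@164 above) against a listener that was created with -k, i.e. TLS required, so the plain-TCP connection attempt is torn down during setup (the spdk_sock_recv errno 107 above) and the controller never initializes:

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''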
00:18:47.210 request: 00:18:47.210 { 00:18:47.210 "name": "TLSTEST", 00:18:47.210 "trtype": "tcp", 00:18:47.210 "traddr": "10.0.0.2", 00:18:47.210 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:47.210 "adrfam": "ipv4", 00:18:47.210 "trsvcid": "4420", 00:18:47.210 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.210 "method": "bdev_nvme_attach_controller", 00:18:47.210 "req_id": 1 00:18:47.210 } 00:18:47.210 Got JSON-RPC error response 00:18:47.210 response: 00:18:47.210 { 00:18:47.210 "code": -32602, 00:18:47.210 "message": "Invalid parameters" 00:18:47.210 } 00:18:47.210 14:00:38 -- target/tls.sh@36 -- # killprocess 3284347 00:18:47.210 14:00:38 -- common/autotest_common.sh@926 -- # '[' -z 3284347 ']' 00:18:47.210 14:00:38 -- common/autotest_common.sh@930 -- # kill -0 3284347 00:18:47.210 14:00:38 -- common/autotest_common.sh@931 -- # uname 00:18:47.210 14:00:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:47.210 14:00:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284347 00:18:47.210 14:00:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:47.210 14:00:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:47.210 14:00:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284347' 00:18:47.210 killing process with pid 3284347 00:18:47.210 14:00:38 -- common/autotest_common.sh@945 -- # kill 3284347 00:18:47.210 Received shutdown signal, test time was about 10.000000 seconds 00:18:47.210 00:18:47.210 Latency(us) 00:18:47.210 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:47.210 =================================================================================================================== 00:18:47.210 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:47.210 14:00:38 -- common/autotest_common.sh@950 -- # wait 3284347 00:18:47.469 14:00:38 -- target/tls.sh@37 -- # return 1 00:18:47.469 14:00:38 -- common/autotest_common.sh@643 -- # es=1 00:18:47.469 14:00:38 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:18:47.469 14:00:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:18:47.469 14:00:38 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:18:47.469 14:00:38 -- target/tls.sh@167 -- # killprocess 3278961 00:18:47.469 14:00:38 -- common/autotest_common.sh@926 -- # '[' -z 3278961 ']' 00:18:47.469 14:00:38 -- common/autotest_common.sh@930 -- # kill -0 3278961 00:18:47.469 14:00:38 -- common/autotest_common.sh@931 -- # uname 00:18:47.469 14:00:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:47.469 14:00:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3278961 00:18:47.469 14:00:38 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:47.469 14:00:38 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:47.469 14:00:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3278961' 00:18:47.469 killing process with pid 3278961 00:18:47.469 14:00:38 -- common/autotest_common.sh@945 -- # kill 3278961 00:18:47.469 14:00:38 -- common/autotest_common.sh@950 -- # wait 3278961 00:18:47.728 14:00:38 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:18:47.728 14:00:38 -- target/tls.sh@49 -- # local key hash crc 00:18:47.728 14:00:38 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:47.728 14:00:38 -- target/tls.sh@51 -- # hash=02 00:18:47.728 14:00:38 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:18:47.728 14:00:38 -- target/tls.sh@52 -- # gzip -1 -c 00:18:47.728 14:00:38 -- target/tls.sh@52 -- # tail -c8 00:18:47.728 14:00:38 -- target/tls.sh@52 -- # head -c 4 00:18:47.728 14:00:38 -- target/tls.sh@52 -- # crc='�e�'\''' 00:18:47.728 14:00:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:18:47.728 14:00:38 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:18:47.728 14:00:38 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.728 14:00:38 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.728 14:00:38 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:47.728 14:00:38 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:47.728 14:00:38 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:47.728 14:00:38 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:18:47.728 14:00:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:47.728 14:00:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:47.728 14:00:38 -- common/autotest_common.sh@10 -- # set +x 00:18:47.728 14:00:38 -- nvmf/common.sh@469 -- # nvmfpid=3284611 00:18:47.728 14:00:38 -- nvmf/common.sh@470 -- # waitforlisten 3284611 00:18:47.728 14:00:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:47.728 14:00:38 -- common/autotest_common.sh@819 -- # '[' -z 3284611 ']' 00:18:47.728 14:00:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.728 14:00:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:47.728 14:00:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.728 14:00:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:47.728 14:00:38 -- common/autotest_common.sh@10 -- # set +x 00:18:47.728 [2024-07-23 14:00:38.650658] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:47.728 [2024-07-23 14:00:38.650700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.728 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.728 [2024-07-23 14:00:38.706743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.988 [2024-07-23 14:00:38.784008] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:47.988 [2024-07-23 14:00:38.784119] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.988 [2024-07-23 14:00:38.784127] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.988 [2024-07-23 14:00:38.784134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
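The key_long construction above is the same interchange container built with hash id 02 (the SHA-384 variant of the format; the 01 used earlier is SHA-256) over a 48-character secret. Condensed, with the output path shortened from the log's full key_long.txt path:

  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
  echo -n "NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):" > key_long.txt
  chmod 0600 key_long.txt                                    # matches the chmod applied to key1/key2 earlier

The freshly restarted target below is then configured through the same setup_nvmf_tgt path, with key_long.txt registered for host1.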
00:18:47.988 [2024-07-23 14:00:38.784152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.556 14:00:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:48.556 14:00:39 -- common/autotest_common.sh@852 -- # return 0 00:18:48.556 14:00:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:48.556 14:00:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:48.556 14:00:39 -- common/autotest_common.sh@10 -- # set +x 00:18:48.556 14:00:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.556 14:00:39 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:48.556 14:00:39 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:48.556 14:00:39 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:48.813 [2024-07-23 14:00:39.622283] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.813 14:00:39 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:48.813 14:00:39 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.071 [2024-07-23 14:00:39.931106] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.071 [2024-07-23 14:00:39.931279] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.071 14:00:39 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.330 malloc0 00:18:49.330 14:00:40 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:49.330 14:00:40 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:49.590 14:00:40 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:49.590 14:00:40 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:49.590 14:00:40 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:49.590 14:00:40 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:49.590 14:00:40 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:18:49.590 14:00:40 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.590 14:00:40 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.590 14:00:40 -- target/tls.sh@28 -- # bdevperf_pid=3284876 00:18:49.590 14:00:40 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.590 14:00:40 -- target/tls.sh@31 -- # waitforlisten 3284876 /var/tmp/bdevperf.sock 00:18:49.590 14:00:40 -- common/autotest_common.sh@819 -- # '[' -z 3284876 
']' 00:18:49.590 14:00:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.590 14:00:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:49.590 14:00:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.590 14:00:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:49.590 14:00:40 -- common/autotest_common.sh@10 -- # set +x 00:18:49.590 [2024-07-23 14:00:40.468438] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:18:49.590 [2024-07-23 14:00:40.468485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284876 ] 00:18:49.590 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.590 [2024-07-23 14:00:40.517072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.590 [2024-07-23 14:00:40.593762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.524 14:00:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:50.524 14:00:41 -- common/autotest_common.sh@852 -- # return 0 00:18:50.524 14:00:41 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:18:50.524 [2024-07-23 14:00:41.436683] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.524 TLSTESTn1 00:18:50.781 14:00:41 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:50.781 Running I/O for 10 seconds... 
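This is the positive-path run: run_bdevperf (tls.sh@176) starts bdevperf idle, attaches the TLS-protected controller over the bdevperf RPC socket, then drives the verify workload. Condensed, with the workspace prefix dropped from the binaries and the key path (flags are verbatim from the trace above):

  # start bdevperf with no bdevs (-z), listening on its own RPC socket
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach the controller; --psk must point at a key file with mode 0600
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk key_long.txt
  # run the 128-deep, 4 KiB verify workload
  bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests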
00:19:00.751 00:19:00.751 Latency(us) 00:19:00.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.751 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:00.751 Verification LBA range: start 0x0 length 0x2000 00:19:00.751 TLSTESTn1 : 10.04 1666.50 6.51 0.00 0.00 76700.14 11112.63 108504.82 00:19:00.751 =================================================================================================================== 00:19:00.751 Total : 1666.50 6.51 0.00 0.00 76700.14 11112.63 108504.82 00:19:00.751 0 00:19:00.751 14:00:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:00.751 14:00:51 -- target/tls.sh@45 -- # killprocess 3284876 00:19:00.751 14:00:51 -- common/autotest_common.sh@926 -- # '[' -z 3284876 ']' 00:19:00.751 14:00:51 -- common/autotest_common.sh@930 -- # kill -0 3284876 00:19:00.751 14:00:51 -- common/autotest_common.sh@931 -- # uname 00:19:00.751 14:00:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:00.751 14:00:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284876 00:19:00.751 14:00:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:00.751 14:00:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:00.751 14:00:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284876' 00:19:00.751 killing process with pid 3284876 00:19:00.751 14:00:51 -- common/autotest_common.sh@945 -- # kill 3284876 00:19:00.751 Received shutdown signal, test time was about 10.000000 seconds 00:19:00.751 00:19:00.751 Latency(us) 00:19:00.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:00.751 =================================================================================================================== 00:19:00.751 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:00.751 14:00:51 -- common/autotest_common.sh@950 -- # wait 3284876 00:19:01.009 14:00:51 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:01.009 14:00:51 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:01.009 14:00:51 -- common/autotest_common.sh@640 -- # local es=0 00:19:01.009 14:00:51 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:01.009 14:00:51 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:19:01.009 14:00:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:01.009 14:00:51 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:19:01.009 14:00:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:01.009 14:00:51 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:01.009 14:00:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:01.009 14:00:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:01.009 14:00:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:01.009 14:00:51 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:19:01.009 14:00:51 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:01.009 14:00:51 -- target/tls.sh@28 -- # bdevperf_pid=3286825 00:19:01.009 14:00:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:01.009 14:00:51 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:01.009 14:00:51 -- target/tls.sh@31 -- # waitforlisten 3286825 /var/tmp/bdevperf.sock 00:19:01.010 14:00:51 -- common/autotest_common.sh@819 -- # '[' -z 3286825 ']' 00:19:01.010 14:00:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:01.010 14:00:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:01.010 14:00:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:01.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:01.010 14:00:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:01.010 14:00:51 -- common/autotest_common.sh@10 -- # set +x 00:19:01.010 [2024-07-23 14:00:52.007728] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:01.010 [2024-07-23 14:00:52.007778] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286825 ] 00:19:01.268 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.268 [2024-07-23 14:00:52.058946] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.268 [2024-07-23 14:00:52.124572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.836 14:00:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:01.836 14:00:52 -- common/autotest_common.sh@852 -- # return 0 00:19:01.836 14:00:52 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:02.095 [2024-07-23 14:00:52.950551] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.095 [2024-07-23 14:00:52.950587] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:02.095 request: 00:19:02.095 { 00:19:02.095 "name": "TLSTEST", 00:19:02.095 "trtype": "tcp", 00:19:02.095 "traddr": "10.0.0.2", 00:19:02.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:02.095 "adrfam": "ipv4", 00:19:02.095 "trsvcid": "4420", 00:19:02.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:02.095 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:02.095 "method": "bdev_nvme_attach_controller", 00:19:02.095 "req_id": 1 00:19:02.095 } 00:19:02.095 Got JSON-RPC error response 00:19:02.095 response: 00:19:02.095 { 00:19:02.095 "code": -22, 00:19:02.095 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:02.095 } 00:19:02.095 14:00:52 -- target/tls.sh@36 -- # killprocess 3286825 00:19:02.095 14:00:52 -- common/autotest_common.sh@926 -- # '[' -z 3286825 ']' 00:19:02.095 14:00:52 -- 
common/autotest_common.sh@930 -- # kill -0 3286825 00:19:02.095 14:00:52 -- common/autotest_common.sh@931 -- # uname 00:19:02.095 14:00:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:02.095 14:00:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3286825 00:19:02.095 14:00:53 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:02.095 14:00:53 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:02.095 14:00:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3286825' 00:19:02.095 killing process with pid 3286825 00:19:02.095 14:00:53 -- common/autotest_common.sh@945 -- # kill 3286825 00:19:02.095 Received shutdown signal, test time was about 10.000000 seconds 00:19:02.095 00:19:02.095 Latency(us) 00:19:02.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.095 =================================================================================================================== 00:19:02.095 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:02.095 14:00:53 -- common/autotest_common.sh@950 -- # wait 3286825 00:19:02.354 14:00:53 -- target/tls.sh@37 -- # return 1 00:19:02.354 14:00:53 -- common/autotest_common.sh@643 -- # es=1 00:19:02.354 14:00:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:02.354 14:00:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:02.354 14:00:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:02.354 14:00:53 -- target/tls.sh@183 -- # killprocess 3284611 00:19:02.354 14:00:53 -- common/autotest_common.sh@926 -- # '[' -z 3284611 ']' 00:19:02.354 14:00:53 -- common/autotest_common.sh@930 -- # kill -0 3284611 00:19:02.354 14:00:53 -- common/autotest_common.sh@931 -- # uname 00:19:02.354 14:00:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:02.354 14:00:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3284611 00:19:02.354 14:00:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:02.354 14:00:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:02.354 14:00:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3284611' 00:19:02.354 killing process with pid 3284611 00:19:02.354 14:00:53 -- common/autotest_common.sh@945 -- # kill 3284611 00:19:02.354 14:00:53 -- common/autotest_common.sh@950 -- # wait 3284611 00:19:02.619 14:00:53 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:02.619 14:00:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:02.619 14:00:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:02.619 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:19:02.619 14:00:53 -- nvmf/common.sh@469 -- # nvmfpid=3287123 00:19:02.619 14:00:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:02.619 14:00:53 -- nvmf/common.sh@470 -- # waitforlisten 3287123 00:19:02.619 14:00:53 -- common/autotest_common.sh@819 -- # '[' -z 3287123 ']' 00:19:02.619 14:00:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.619 14:00:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:02.619 14:00:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
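The -22 "Could not retrieve PSK from file" above is the expected outcome of tls.sh@179-180: the key file was deliberately loosened to 0666 and the attach wrapped in NOT(), so this leg passes only when the PSK loader refuses the file. An illustrative shell equivalent of the guard being exercised (the real check lives in C, in tcp_load_psk; stat -c is a GNU coreutils form):

  chmod 0666 key_long.txt                # tls.sh@179
  mode=$(stat -c %a key_long.txt)
  if [ "$mode" != "600" ]; then
      echo "Incorrect permissions for PSK file" >&2   # message seen in the trace
  fi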
00:19:02.619 14:00:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:02.619 14:00:53 -- common/autotest_common.sh@10 -- # set +x 00:19:02.619 [2024-07-23 14:00:53.525412] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:02.619 [2024-07-23 14:00:53.525462] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.619 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.619 [2024-07-23 14:00:53.582538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.888 [2024-07-23 14:00:53.655007] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:02.888 [2024-07-23 14:00:53.655122] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.888 [2024-07-23 14:00:53.655130] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.888 [2024-07-23 14:00:53.655137] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.888 [2024-07-23 14:00:53.655152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.453 14:00:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:03.453 14:00:54 -- common/autotest_common.sh@852 -- # return 0 00:19:03.453 14:00:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:03.453 14:00:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:03.453 14:00:54 -- common/autotest_common.sh@10 -- # set +x 00:19:03.453 14:00:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.453 14:00:54 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:03.453 14:00:54 -- common/autotest_common.sh@640 -- # local es=0 00:19:03.453 14:00:54 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:03.453 14:00:54 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:19:03.453 14:00:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:03.453 14:00:54 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:19:03.453 14:00:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:03.453 14:00:54 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:03.453 14:00:54 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:03.453 14:00:54 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:03.711 [2024-07-23 14:00:54.498466] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:03.711 14:00:54 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:03.711 14:00:54 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:03.970 [2024-07-23 14:00:54.815275] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:03.970 [2024-07-23 14:00:54.815465] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:03.970 14:00:54 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:04.228 malloc0 00:19:04.228 14:00:55 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:04.228 14:00:55 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:04.488 [2024-07-23 14:00:55.304760] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:04.488 [2024-07-23 14:00:55.304787] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:04.488 [2024-07-23 14:00:55.304801] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:19:04.488 request: 00:19:04.488 { 00:19:04.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:04.488 "host": "nqn.2016-06.io.spdk:host1", 00:19:04.488 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:04.488 "method": "nvmf_subsystem_add_host", 00:19:04.488 "req_id": 1 00:19:04.488 } 00:19:04.488 Got JSON-RPC error response 00:19:04.488 response: 00:19:04.488 { 00:19:04.488 "code": -32603, 00:19:04.488 "message": "Internal error" 00:19:04.488 } 00:19:04.488 14:00:55 -- common/autotest_common.sh@643 -- # es=1 00:19:04.488 14:00:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:04.488 14:00:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:04.488 14:00:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:04.488 14:00:55 -- target/tls.sh@189 -- # killprocess 3287123 00:19:04.488 14:00:55 -- common/autotest_common.sh@926 -- # '[' -z 3287123 ']' 00:19:04.488 14:00:55 -- common/autotest_common.sh@930 -- # kill -0 3287123 00:19:04.488 14:00:55 -- common/autotest_common.sh@931 -- # uname 00:19:04.488 14:00:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:04.488 14:00:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3287123 00:19:04.488 14:00:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:04.488 14:00:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:04.488 14:00:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3287123' 00:19:04.488 killing process with pid 3287123 00:19:04.488 14:00:55 -- common/autotest_common.sh@945 -- # kill 3287123 00:19:04.488 14:00:55 -- common/autotest_common.sh@950 -- # wait 3287123 00:19:04.747 14:00:55 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:04.747 14:00:55 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:19:04.747 14:00:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:04.747 14:00:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:04.747 14:00:55 -- common/autotest_common.sh@10 -- # set +x 00:19:04.747 14:00:55 -- nvmf/common.sh@469 -- # nvmfpid=3287484 00:19:04.747 14:00:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:19:04.747 14:00:55 -- nvmf/common.sh@470 -- # waitforlisten 3287484 00:19:04.747 14:00:55 -- common/autotest_common.sh@819 -- # '[' -z 3287484 ']' 00:19:04.747 14:00:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.747 14:00:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:04.747 14:00:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.747 14:00:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:04.747 14:00:55 -- common/autotest_common.sh@10 -- # set +x 00:19:04.747 [2024-07-23 14:00:55.632580] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:04.747 [2024-07-23 14:00:55.632624] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.747 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.747 [2024-07-23 14:00:55.688774] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.006 [2024-07-23 14:00:55.765818] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:05.006 [2024-07-23 14:00:55.765921] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:05.006 [2024-07-23 14:00:55.765929] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:05.006 [2024-07-23 14:00:55.765935] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
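tls.sh@186 just repeated the same permissions test on the target side: with the key still world-readable, nvmf_subsystem_add_host failed with -32603 "Internal error" before any host entry was created, and tls.sh@190 restored mode 0600 for the positive re-run that follows. Reduced to the commands involved (paths shortened; a sketch, not verbatim):

  chmod 0666 key_long.txt
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key_long.txt \
      && echo "unexpected success" || echo "rejected as expected (-32603)"
  chmod 0600 key_long.txt                # tls.sh@190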
00:19:05.006 [2024-07-23 14:00:55.765949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:05.576 14:00:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:05.576 14:00:56 -- common/autotest_common.sh@852 -- # return 0 00:19:05.576 14:00:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:05.576 14:00:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:05.576 14:00:56 -- common/autotest_common.sh@10 -- # set +x 00:19:05.576 14:00:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:05.576 14:00:56 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:05.576 14:00:56 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:05.576 14:00:56 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:05.835 [2024-07-23 14:00:56.596958] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:05.835 14:00:56 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:05.835 14:00:56 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:06.095 [2024-07-23 14:00:56.913784] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:06.095 [2024-07-23 14:00:56.913972] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.095 14:00:56 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:06.095 malloc0 00:19:06.095 14:00:57 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:06.354 14:00:57 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:06.618 14:00:57 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:06.618 14:00:57 -- target/tls.sh@197 -- # bdevperf_pid=3287751 00:19:06.618 14:00:57 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:06.618 14:00:57 -- target/tls.sh@200 -- # waitforlisten 3287751 /var/tmp/bdevperf.sock 00:19:06.618 14:00:57 -- common/autotest_common.sh@819 -- # '[' -z 3287751 ']' 00:19:06.618 14:00:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:06.618 14:00:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:06.618 14:00:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:06.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
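setup_nvmf_tgt (tls.sh@194, the same helper as @174 earlier) is the target-side boilerplate each of these runs repeats. Its RPC sequence, with the workspace prefix dropped (a condensation of the trace, not new commands):

  rpc.py nvmf_create_transport -t tcp -o   # -o matches "c2h_success": false in the config dumps below
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k        # -k enables the (experimental) TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key_long.txt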
00:19:06.618 14:00:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:06.618 14:00:57 -- common/autotest_common.sh@10 -- # set +x 00:19:06.618 [2024-07-23 14:00:57.461794] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:06.618 [2024-07-23 14:00:57.461838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3287751 ] 00:19:06.618 EAL: No free 2048 kB hugepages reported on node 1 00:19:06.618 [2024-07-23 14:00:57.511985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.618 [2024-07-23 14:00:57.588320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.557 14:00:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:07.557 14:00:58 -- common/autotest_common.sh@852 -- # return 0 00:19:07.557 14:00:58 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:07.557 [2024-07-23 14:00:58.419814] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:07.557 TLSTESTn1 00:19:07.557 14:00:58 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:07.815 14:00:58 -- target/tls.sh@205 -- # tgtconf='{ 00:19:07.815 "subsystems": [ 00:19:07.815 { 00:19:07.815 "subsystem": "iobuf", 00:19:07.815 "config": [ 00:19:07.815 { 00:19:07.815 "method": "iobuf_set_options", 00:19:07.815 "params": { 00:19:07.815 "small_pool_count": 8192, 00:19:07.815 "large_pool_count": 1024, 00:19:07.815 "small_bufsize": 8192, 00:19:07.815 "large_bufsize": 135168 00:19:07.815 } 00:19:07.815 } 00:19:07.815 ] 00:19:07.815 }, 00:19:07.815 { 00:19:07.815 "subsystem": "sock", 00:19:07.815 "config": [ 00:19:07.815 { 00:19:07.815 "method": "sock_impl_set_options", 00:19:07.815 "params": { 00:19:07.815 "impl_name": "posix", 00:19:07.815 "recv_buf_size": 2097152, 00:19:07.815 "send_buf_size": 2097152, 00:19:07.815 "enable_recv_pipe": true, 00:19:07.815 "enable_quickack": false, 00:19:07.815 "enable_placement_id": 0, 00:19:07.815 "enable_zerocopy_send_server": true, 00:19:07.815 "enable_zerocopy_send_client": false, 00:19:07.815 "zerocopy_threshold": 0, 00:19:07.815 "tls_version": 0, 00:19:07.815 "enable_ktls": false 00:19:07.815 } 00:19:07.815 }, 00:19:07.815 { 00:19:07.815 "method": "sock_impl_set_options", 00:19:07.815 "params": { 00:19:07.815 "impl_name": "ssl", 00:19:07.815 "recv_buf_size": 4096, 00:19:07.815 "send_buf_size": 4096, 00:19:07.815 "enable_recv_pipe": true, 00:19:07.815 "enable_quickack": false, 00:19:07.815 "enable_placement_id": 0, 00:19:07.815 "enable_zerocopy_send_server": true, 00:19:07.815 "enable_zerocopy_send_client": false, 00:19:07.815 "zerocopy_threshold": 0, 00:19:07.815 "tls_version": 0, 00:19:07.815 "enable_ktls": false 00:19:07.815 } 00:19:07.815 } 00:19:07.815 ] 00:19:07.815 }, 00:19:07.815 { 00:19:07.815 "subsystem": "vmd", 00:19:07.815 "config": [] 00:19:07.815 }, 00:19:07.815 { 00:19:07.815 "subsystem": "accel", 00:19:07.815 "config": [ 00:19:07.815 { 00:19:07.815 "method": "accel_set_options", 00:19:07.815 "params": { 00:19:07.815 "small_cache_size": 128, 
00:19:07.816 "large_cache_size": 16, 00:19:07.816 "task_count": 2048, 00:19:07.816 "sequence_count": 2048, 00:19:07.816 "buf_count": 2048 00:19:07.816 } 00:19:07.816 } 00:19:07.816 ] 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "subsystem": "bdev", 00:19:07.816 "config": [ 00:19:07.816 { 00:19:07.816 "method": "bdev_set_options", 00:19:07.816 "params": { 00:19:07.816 "bdev_io_pool_size": 65535, 00:19:07.816 "bdev_io_cache_size": 256, 00:19:07.816 "bdev_auto_examine": true, 00:19:07.816 "iobuf_small_cache_size": 128, 00:19:07.816 "iobuf_large_cache_size": 16 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "bdev_raid_set_options", 00:19:07.816 "params": { 00:19:07.816 "process_window_size_kb": 1024 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "bdev_iscsi_set_options", 00:19:07.816 "params": { 00:19:07.816 "timeout_sec": 30 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "bdev_nvme_set_options", 00:19:07.816 "params": { 00:19:07.816 "action_on_timeout": "none", 00:19:07.816 "timeout_us": 0, 00:19:07.816 "timeout_admin_us": 0, 00:19:07.816 "keep_alive_timeout_ms": 10000, 00:19:07.816 "transport_retry_count": 4, 00:19:07.816 "arbitration_burst": 0, 00:19:07.816 "low_priority_weight": 0, 00:19:07.816 "medium_priority_weight": 0, 00:19:07.816 "high_priority_weight": 0, 00:19:07.816 "nvme_adminq_poll_period_us": 10000, 00:19:07.816 "nvme_ioq_poll_period_us": 0, 00:19:07.816 "io_queue_requests": 0, 00:19:07.816 "delay_cmd_submit": true, 00:19:07.816 "bdev_retry_count": 3, 00:19:07.816 "transport_ack_timeout": 0, 00:19:07.816 "ctrlr_loss_timeout_sec": 0, 00:19:07.816 "reconnect_delay_sec": 0, 00:19:07.816 "fast_io_fail_timeout_sec": 0, 00:19:07.816 "generate_uuids": false, 00:19:07.816 "transport_tos": 0, 00:19:07.816 "io_path_stat": false, 00:19:07.816 "allow_accel_sequence": false 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "bdev_nvme_set_hotplug", 00:19:07.816 "params": { 00:19:07.816 "period_us": 100000, 00:19:07.816 "enable": false 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "bdev_malloc_create", 00:19:07.816 "params": { 00:19:07.816 "name": "malloc0", 00:19:07.816 "num_blocks": 8192, 00:19:07.816 "block_size": 4096, 00:19:07.816 "physical_block_size": 4096, 00:19:07.816 "uuid": "d8abee7e-ff90-4e5c-b89c-30cf49e885b1", 00:19:07.816 "optimal_io_boundary": 0 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "bdev_wait_for_examine" 00:19:07.816 } 00:19:07.816 ] 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "subsystem": "nbd", 00:19:07.816 "config": [] 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "subsystem": "scheduler", 00:19:07.816 "config": [ 00:19:07.816 { 00:19:07.816 "method": "framework_set_scheduler", 00:19:07.816 "params": { 00:19:07.816 "name": "static" 00:19:07.816 } 00:19:07.816 } 00:19:07.816 ] 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "subsystem": "nvmf", 00:19:07.816 "config": [ 00:19:07.816 { 00:19:07.816 "method": "nvmf_set_config", 00:19:07.816 "params": { 00:19:07.816 "discovery_filter": "match_any", 00:19:07.816 "admin_cmd_passthru": { 00:19:07.816 "identify_ctrlr": false 00:19:07.816 } 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "nvmf_set_max_subsystems", 00:19:07.816 "params": { 00:19:07.816 "max_subsystems": 1024 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "nvmf_set_crdt", 00:19:07.816 "params": { 00:19:07.816 "crdt1": 0, 00:19:07.816 "crdt2": 0, 00:19:07.816 "crdt3": 0 00:19:07.816 } 
00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "nvmf_create_transport", 00:19:07.816 "params": { 00:19:07.816 "trtype": "TCP", 00:19:07.816 "max_queue_depth": 128, 00:19:07.816 "max_io_qpairs_per_ctrlr": 127, 00:19:07.816 "in_capsule_data_size": 4096, 00:19:07.816 "max_io_size": 131072, 00:19:07.816 "io_unit_size": 131072, 00:19:07.816 "max_aq_depth": 128, 00:19:07.816 "num_shared_buffers": 511, 00:19:07.816 "buf_cache_size": 4294967295, 00:19:07.816 "dif_insert_or_strip": false, 00:19:07.816 "zcopy": false, 00:19:07.816 "c2h_success": false, 00:19:07.816 "sock_priority": 0, 00:19:07.816 "abort_timeout_sec": 1 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "nvmf_create_subsystem", 00:19:07.816 "params": { 00:19:07.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.816 "allow_any_host": false, 00:19:07.816 "serial_number": "SPDK00000000000001", 00:19:07.816 "model_number": "SPDK bdev Controller", 00:19:07.816 "max_namespaces": 10, 00:19:07.816 "min_cntlid": 1, 00:19:07.816 "max_cntlid": 65519, 00:19:07.816 "ana_reporting": false 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "nvmf_subsystem_add_host", 00:19:07.816 "params": { 00:19:07.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.816 "host": "nqn.2016-06.io.spdk:host1", 00:19:07.816 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "nvmf_subsystem_add_ns", 00:19:07.816 "params": { 00:19:07.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.816 "namespace": { 00:19:07.816 "nsid": 1, 00:19:07.816 "bdev_name": "malloc0", 00:19:07.816 "nguid": "D8ABEE7EFF904E5CB89C30CF49E885B1", 00:19:07.816 "uuid": "d8abee7e-ff90-4e5c-b89c-30cf49e885b1" 00:19:07.816 } 00:19:07.816 } 00:19:07.816 }, 00:19:07.816 { 00:19:07.816 "method": "nvmf_subsystem_add_listener", 00:19:07.816 "params": { 00:19:07.816 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:07.816 "listen_address": { 00:19:07.816 "trtype": "TCP", 00:19:07.816 "adrfam": "IPv4", 00:19:07.816 "traddr": "10.0.0.2", 00:19:07.816 "trsvcid": "4420" 00:19:07.816 }, 00:19:07.816 "secure_channel": true 00:19:07.816 } 00:19:07.816 } 00:19:07.816 ] 00:19:07.817 } 00:19:07.817 ] 00:19:07.817 }' 00:19:07.817 14:00:58 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:08.076 14:00:58 -- target/tls.sh@206 -- # bdevperfconf='{ 00:19:08.076 "subsystems": [ 00:19:08.076 { 00:19:08.076 "subsystem": "iobuf", 00:19:08.076 "config": [ 00:19:08.076 { 00:19:08.076 "method": "iobuf_set_options", 00:19:08.076 "params": { 00:19:08.076 "small_pool_count": 8192, 00:19:08.076 "large_pool_count": 1024, 00:19:08.076 "small_bufsize": 8192, 00:19:08.076 "large_bufsize": 135168 00:19:08.076 } 00:19:08.076 } 00:19:08.076 ] 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "subsystem": "sock", 00:19:08.076 "config": [ 00:19:08.076 { 00:19:08.076 "method": "sock_impl_set_options", 00:19:08.076 "params": { 00:19:08.076 "impl_name": "posix", 00:19:08.076 "recv_buf_size": 2097152, 00:19:08.076 "send_buf_size": 2097152, 00:19:08.076 "enable_recv_pipe": true, 00:19:08.076 "enable_quickack": false, 00:19:08.076 "enable_placement_id": 0, 00:19:08.076 "enable_zerocopy_send_server": true, 00:19:08.076 "enable_zerocopy_send_client": false, 00:19:08.076 "zerocopy_threshold": 0, 00:19:08.076 "tls_version": 0, 00:19:08.076 "enable_ktls": false 00:19:08.076 } 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "method": 
"sock_impl_set_options", 00:19:08.076 "params": { 00:19:08.076 "impl_name": "ssl", 00:19:08.076 "recv_buf_size": 4096, 00:19:08.076 "send_buf_size": 4096, 00:19:08.076 "enable_recv_pipe": true, 00:19:08.076 "enable_quickack": false, 00:19:08.076 "enable_placement_id": 0, 00:19:08.076 "enable_zerocopy_send_server": true, 00:19:08.076 "enable_zerocopy_send_client": false, 00:19:08.076 "zerocopy_threshold": 0, 00:19:08.076 "tls_version": 0, 00:19:08.076 "enable_ktls": false 00:19:08.076 } 00:19:08.076 } 00:19:08.076 ] 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "subsystem": "vmd", 00:19:08.076 "config": [] 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "subsystem": "accel", 00:19:08.076 "config": [ 00:19:08.076 { 00:19:08.076 "method": "accel_set_options", 00:19:08.076 "params": { 00:19:08.076 "small_cache_size": 128, 00:19:08.076 "large_cache_size": 16, 00:19:08.076 "task_count": 2048, 00:19:08.076 "sequence_count": 2048, 00:19:08.076 "buf_count": 2048 00:19:08.076 } 00:19:08.076 } 00:19:08.076 ] 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "subsystem": "bdev", 00:19:08.076 "config": [ 00:19:08.076 { 00:19:08.076 "method": "bdev_set_options", 00:19:08.076 "params": { 00:19:08.076 "bdev_io_pool_size": 65535, 00:19:08.076 "bdev_io_cache_size": 256, 00:19:08.076 "bdev_auto_examine": true, 00:19:08.076 "iobuf_small_cache_size": 128, 00:19:08.076 "iobuf_large_cache_size": 16 00:19:08.076 } 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "method": "bdev_raid_set_options", 00:19:08.076 "params": { 00:19:08.076 "process_window_size_kb": 1024 00:19:08.076 } 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "method": "bdev_iscsi_set_options", 00:19:08.076 "params": { 00:19:08.076 "timeout_sec": 30 00:19:08.076 } 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "method": "bdev_nvme_set_options", 00:19:08.076 "params": { 00:19:08.076 "action_on_timeout": "none", 00:19:08.076 "timeout_us": 0, 00:19:08.076 "timeout_admin_us": 0, 00:19:08.076 "keep_alive_timeout_ms": 10000, 00:19:08.076 "transport_retry_count": 4, 00:19:08.076 "arbitration_burst": 0, 00:19:08.076 "low_priority_weight": 0, 00:19:08.076 "medium_priority_weight": 0, 00:19:08.076 "high_priority_weight": 0, 00:19:08.076 "nvme_adminq_poll_period_us": 10000, 00:19:08.076 "nvme_ioq_poll_period_us": 0, 00:19:08.076 "io_queue_requests": 512, 00:19:08.076 "delay_cmd_submit": true, 00:19:08.076 "bdev_retry_count": 3, 00:19:08.076 "transport_ack_timeout": 0, 00:19:08.076 "ctrlr_loss_timeout_sec": 0, 00:19:08.076 "reconnect_delay_sec": 0, 00:19:08.076 "fast_io_fail_timeout_sec": 0, 00:19:08.076 "generate_uuids": false, 00:19:08.076 "transport_tos": 0, 00:19:08.076 "io_path_stat": false, 00:19:08.076 "allow_accel_sequence": false 00:19:08.076 } 00:19:08.076 }, 00:19:08.076 { 00:19:08.076 "method": "bdev_nvme_attach_controller", 00:19:08.076 "params": { 00:19:08.076 "name": "TLSTEST", 00:19:08.077 "trtype": "TCP", 00:19:08.077 "adrfam": "IPv4", 00:19:08.077 "traddr": "10.0.0.2", 00:19:08.077 "trsvcid": "4420", 00:19:08.077 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.077 "prchk_reftag": false, 00:19:08.077 "prchk_guard": false, 00:19:08.077 "ctrlr_loss_timeout_sec": 0, 00:19:08.077 "reconnect_delay_sec": 0, 00:19:08.077 "fast_io_fail_timeout_sec": 0, 00:19:08.077 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:08.077 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:08.077 "hdgst": false, 00:19:08.077 "ddgst": false 00:19:08.077 } 00:19:08.077 }, 00:19:08.077 { 00:19:08.077 "method": "bdev_nvme_set_hotplug", 00:19:08.077 
"params": { 00:19:08.077 "period_us": 100000, 00:19:08.077 "enable": false 00:19:08.077 } 00:19:08.077 }, 00:19:08.077 { 00:19:08.077 "method": "bdev_wait_for_examine" 00:19:08.077 } 00:19:08.077 ] 00:19:08.077 }, 00:19:08.077 { 00:19:08.077 "subsystem": "nbd", 00:19:08.077 "config": [] 00:19:08.077 } 00:19:08.077 ] 00:19:08.077 }' 00:19:08.077 14:00:58 -- target/tls.sh@208 -- # killprocess 3287751 00:19:08.077 14:00:58 -- common/autotest_common.sh@926 -- # '[' -z 3287751 ']' 00:19:08.077 14:00:58 -- common/autotest_common.sh@930 -- # kill -0 3287751 00:19:08.077 14:00:58 -- common/autotest_common.sh@931 -- # uname 00:19:08.077 14:00:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:08.077 14:00:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3287751 00:19:08.077 14:00:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:08.077 14:00:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:08.077 14:00:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3287751' 00:19:08.077 killing process with pid 3287751 00:19:08.077 14:00:59 -- common/autotest_common.sh@945 -- # kill 3287751 00:19:08.077 Received shutdown signal, test time was about 10.000000 seconds 00:19:08.077 00:19:08.077 Latency(us) 00:19:08.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.077 =================================================================================================================== 00:19:08.077 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:08.077 14:00:59 -- common/autotest_common.sh@950 -- # wait 3287751 00:19:08.335 14:00:59 -- target/tls.sh@209 -- # killprocess 3287484 00:19:08.335 14:00:59 -- common/autotest_common.sh@926 -- # '[' -z 3287484 ']' 00:19:08.335 14:00:59 -- common/autotest_common.sh@930 -- # kill -0 3287484 00:19:08.335 14:00:59 -- common/autotest_common.sh@931 -- # uname 00:19:08.335 14:00:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:08.335 14:00:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3287484 00:19:08.335 14:00:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:08.335 14:00:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:08.335 14:00:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3287484' 00:19:08.335 killing process with pid 3287484 00:19:08.335 14:00:59 -- common/autotest_common.sh@945 -- # kill 3287484 00:19:08.335 14:00:59 -- common/autotest_common.sh@950 -- # wait 3287484 00:19:08.594 14:00:59 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:08.594 14:00:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:08.594 14:00:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:08.594 14:00:59 -- target/tls.sh@212 -- # echo '{ 00:19:08.594 "subsystems": [ 00:19:08.594 { 00:19:08.594 "subsystem": "iobuf", 00:19:08.594 "config": [ 00:19:08.594 { 00:19:08.594 "method": "iobuf_set_options", 00:19:08.594 "params": { 00:19:08.594 "small_pool_count": 8192, 00:19:08.594 "large_pool_count": 1024, 00:19:08.594 "small_bufsize": 8192, 00:19:08.594 "large_bufsize": 135168 00:19:08.594 } 00:19:08.594 } 00:19:08.594 ] 00:19:08.594 }, 00:19:08.594 { 00:19:08.594 "subsystem": "sock", 00:19:08.594 "config": [ 00:19:08.594 { 00:19:08.594 "method": "sock_impl_set_options", 00:19:08.594 "params": { 00:19:08.594 "impl_name": "posix", 00:19:08.594 "recv_buf_size": 2097152, 00:19:08.594 "send_buf_size": 2097152, 
00:19:08.594 "enable_recv_pipe": true, 00:19:08.594 "enable_quickack": false, 00:19:08.594 "enable_placement_id": 0, 00:19:08.594 "enable_zerocopy_send_server": true, 00:19:08.594 "enable_zerocopy_send_client": false, 00:19:08.594 "zerocopy_threshold": 0, 00:19:08.594 "tls_version": 0, 00:19:08.594 "enable_ktls": false 00:19:08.594 } 00:19:08.594 }, 00:19:08.594 { 00:19:08.595 "method": "sock_impl_set_options", 00:19:08.595 "params": { 00:19:08.595 "impl_name": "ssl", 00:19:08.595 "recv_buf_size": 4096, 00:19:08.595 "send_buf_size": 4096, 00:19:08.595 "enable_recv_pipe": true, 00:19:08.595 "enable_quickack": false, 00:19:08.595 "enable_placement_id": 0, 00:19:08.595 "enable_zerocopy_send_server": true, 00:19:08.595 "enable_zerocopy_send_client": false, 00:19:08.595 "zerocopy_threshold": 0, 00:19:08.595 "tls_version": 0, 00:19:08.595 "enable_ktls": false 00:19:08.595 } 00:19:08.595 } 00:19:08.595 ] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "vmd", 00:19:08.595 "config": [] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "accel", 00:19:08.595 "config": [ 00:19:08.595 { 00:19:08.595 "method": "accel_set_options", 00:19:08.595 "params": { 00:19:08.595 "small_cache_size": 128, 00:19:08.595 "large_cache_size": 16, 00:19:08.595 "task_count": 2048, 00:19:08.595 "sequence_count": 2048, 00:19:08.595 "buf_count": 2048 00:19:08.595 } 00:19:08.595 } 00:19:08.595 ] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "bdev", 00:19:08.595 "config": [ 00:19:08.595 { 00:19:08.595 "method": "bdev_set_options", 00:19:08.595 "params": { 00:19:08.595 "bdev_io_pool_size": 65535, 00:19:08.595 "bdev_io_cache_size": 256, 00:19:08.595 "bdev_auto_examine": true, 00:19:08.595 "iobuf_small_cache_size": 128, 00:19:08.595 "iobuf_large_cache_size": 16 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_raid_set_options", 00:19:08.595 "params": { 00:19:08.595 "process_window_size_kb": 1024 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_iscsi_set_options", 00:19:08.595 "params": { 00:19:08.595 "timeout_sec": 30 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_nvme_set_options", 00:19:08.595 "params": { 00:19:08.595 "action_on_timeout": "none", 00:19:08.595 "timeout_us": 0, 00:19:08.595 "timeout_admin_us": 0, 00:19:08.595 "keep_alive_timeout_ms": 10000, 00:19:08.595 "transport_retry_count": 4, 00:19:08.595 "arbitration_burst": 0, 00:19:08.595 "low_priority_weight": 0, 00:19:08.595 "medium_priority_weight": 0, 00:19:08.595 "high_priority_weight": 0, 00:19:08.595 "nvme_adminq_poll_period_us": 10000, 00:19:08.595 "nvme_ioq_poll_period_us": 0, 00:19:08.595 "io_queue_requests": 0, 00:19:08.595 "delay_cmd_submit": true, 00:19:08.595 "bdev_retry_count": 3, 00:19:08.595 "transport_ack_timeout": 0, 00:19:08.595 "ctrlr_loss_timeout_sec": 0, 00:19:08.595 "reconnect_delay_sec": 0, 00:19:08.595 "fast_io_fail_timeout_sec": 0, 00:19:08.595 "generate_uuids": false, 00:19:08.595 "transport_tos": 0, 00:19:08.595 "io_path_stat": false, 00:19:08.595 "allow_accel_sequence": false 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_nvme_set_hotplug", 00:19:08.595 "params": { 00:19:08.595 "period_us": 100000, 00:19:08.595 "enable": false 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_malloc_create", 00:19:08.595 "params": { 00:19:08.595 "name": "malloc0", 00:19:08.595 "num_blocks": 8192, 00:19:08.595 "block_size": 4096, 00:19:08.595 "physical_block_size": 4096, 00:19:08.595 "uuid": 
"d8abee7e-ff90-4e5c-b89c-30cf49e885b1", 00:19:08.595 "optimal_io_boundary": 0 00:19:08.595 } 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "method": "bdev_wait_for_examine" 00:19:08.595 } 00:19:08.595 ] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "nbd", 00:19:08.595 "config": [] 00:19:08.595 }, 00:19:08.595 { 00:19:08.595 "subsystem": "scheduler", 00:19:08.595 "config": [ 00:19:08.595 { 00:19:08.595 "method": "framework_set_scheduler", 00:19:08.596 "params": { 00:19:08.596 "name": "static" 00:19:08.596 } 00:19:08.596 } 00:19:08.596 ] 00:19:08.596 }, 00:19:08.596 { 00:19:08.596 "subsystem": "nvmf", 00:19:08.596 "config": [ 00:19:08.596 { 00:19:08.596 "method": "nvmf_set_config", 00:19:08.596 "params": { 00:19:08.596 "discovery_filter": "match_any", 00:19:08.596 "admin_cmd_passthru": { 00:19:08.596 "identify_ctrlr": false 00:19:08.596 } 00:19:08.596 } 00:19:08.596 }, 00:19:08.596 { 00:19:08.596 "method": "nvmf_set_max_subsystems", 00:19:08.596 "params": { 00:19:08.596 "max_subsystems": 1024 00:19:08.596 } 00:19:08.596 }, 00:19:08.596 { 00:19:08.596 "method": "nvmf_set_crdt", 00:19:08.596 "params": { 00:19:08.596 "crdt1": 0, 00:19:08.596 "crdt2": 0, 00:19:08.596 "crdt3": 0 00:19:08.596 } 00:19:08.596 }, 00:19:08.596 { 00:19:08.596 "method": "nvmf_create_transport", 00:19:08.596 "params": { 00:19:08.596 "trtype": "TCP", 00:19:08.596 "max_queue_depth": 128, 00:19:08.596 "max_io_qpairs_per_ctrlr": 127, 00:19:08.596 "in_capsule_data_size": 4096, 00:19:08.596 "max_io_size": 131072, 00:19:08.596 "io_unit_size": 131072, 00:19:08.596 "max_aq_depth": 128, 00:19:08.596 "num_shared_buffers": 511, 00:19:08.596 "buf_cache_size": 4294967295, 00:19:08.596 "dif_insert_or_strip": false, 00:19:08.596 "zcopy": false, 00:19:08.596 "c2h_success": false, 00:19:08.596 "sock_priority": 0, 00:19:08.596 "abort_timeout_sec": 1 00:19:08.596 } 00:19:08.596 }, 00:19:08.596 { 00:19:08.596 "method": "nvmf_create_subsystem", 00:19:08.596 "params": { 00:19:08.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.596 "allow_any_host": false, 00:19:08.596 "serial_number": "SPDK00000000000001", 00:19:08.596 "model_number": "SPDK bdev Controller", 00:19:08.596 "max_namespaces": 10, 00:19:08.596 "min_cntlid": 1, 00:19:08.596 "max_cntlid": 65519, 00:19:08.596 "ana_reporting": false 00:19:08.596 } 00:19:08.596 }, 00:19:08.596 { 00:19:08.596 "method": "nvmf_subsystem_add_host", 00:19:08.596 "params": { 00:19:08.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.596 "host": "nqn.2016-06.io.spdk:host1", 00:19:08.596 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:19:08.596 } 00:19:08.596 }, 00:19:08.596 { 00:19:08.596 "method": "nvmf_subsystem_add_ns", 00:19:08.596 "params": { 00:19:08.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.596 "namespace": { 00:19:08.596 "nsid": 1, 00:19:08.596 "bdev_name": "malloc0", 00:19:08.596 "nguid": "D8ABEE7EFF904E5CB89C30CF49E885B1", 00:19:08.596 "uuid": "d8abee7e-ff90-4e5c-b89c-30cf49e885b1" 00:19:08.596 } 00:19:08.596 } 00:19:08.596 }, 00:19:08.596 { 00:19:08.596 "method": "nvmf_subsystem_add_listener", 00:19:08.596 "params": { 00:19:08.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.596 "listen_address": { 00:19:08.596 "trtype": "TCP", 00:19:08.596 "adrfam": "IPv4", 00:19:08.596 "traddr": "10.0.0.2", 00:19:08.596 "trsvcid": "4420" 00:19:08.596 }, 00:19:08.596 "secure_channel": true 00:19:08.596 } 00:19:08.596 } 00:19:08.596 ] 00:19:08.596 } 00:19:08.596 ] 00:19:08.596 }' 00:19:08.596 14:00:59 -- common/autotest_common.sh@10 -- # set +x 
00:19:08.596 14:00:59 -- nvmf/common.sh@469 -- # nvmfpid=3288227 00:19:08.596 14:00:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:08.596 14:00:59 -- nvmf/common.sh@470 -- # waitforlisten 3288227 00:19:08.596 14:00:59 -- common/autotest_common.sh@819 -- # '[' -z 3288227 ']' 00:19:08.596 14:00:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.596 14:00:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:08.596 14:00:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.596 14:00:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:08.596 14:00:59 -- common/autotest_common.sh@10 -- # set +x 00:19:08.596 [2024-07-23 14:00:59.555149] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:08.596 [2024-07-23 14:00:59.555198] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.596 EAL: No free 2048 kB hugepages reported on node 1 00:19:08.855 [2024-07-23 14:00:59.611887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.855 [2024-07-23 14:00:59.678844] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:08.855 [2024-07-23 14:00:59.678972] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.855 [2024-07-23 14:00:59.678980] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.855 [2024-07-23 14:00:59.678986] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:08.855 [2024-07-23 14:00:59.679001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.113 [2024-07-23 14:00:59.872330] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:09.113 [2024-07-23 14:00:59.904374] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.113 [2024-07-23 14:00:59.904554] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:09.373 14:01:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:09.373 14:01:00 -- common/autotest_common.sh@852 -- # return 0 00:19:09.373 14:01:00 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:09.373 14:01:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:09.373 14:01:00 -- common/autotest_common.sh@10 -- # set +x 00:19:09.373 14:01:00 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:09.373 14:01:00 -- target/tls.sh@216 -- # bdevperf_pid=3288308 00:19:09.373 14:01:00 -- target/tls.sh@217 -- # waitforlisten 3288308 /var/tmp/bdevperf.sock 00:19:09.373 14:01:00 -- common/autotest_common.sh@819 -- # '[' -z 3288308 ']' 00:19:09.373 14:01:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.373 14:01:00 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:09.373 14:01:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:09.373 14:01:00 -- target/tls.sh@213 -- # echo '{ 00:19:09.373 "subsystems": [ 00:19:09.373 { 00:19:09.373 "subsystem": "iobuf", 00:19:09.373 "config": [ 00:19:09.373 { 00:19:09.373 "method": "iobuf_set_options", 00:19:09.373 "params": { 00:19:09.373 "small_pool_count": 8192, 00:19:09.373 "large_pool_count": 1024, 00:19:09.373 "small_bufsize": 8192, 00:19:09.373 "large_bufsize": 135168 00:19:09.373 } 00:19:09.373 } 00:19:09.373 ] 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "subsystem": "sock", 00:19:09.373 "config": [ 00:19:09.373 { 00:19:09.373 "method": "sock_impl_set_options", 00:19:09.373 "params": { 00:19:09.373 "impl_name": "posix", 00:19:09.373 "recv_buf_size": 2097152, 00:19:09.373 "send_buf_size": 2097152, 00:19:09.373 "enable_recv_pipe": true, 00:19:09.373 "enable_quickack": false, 00:19:09.373 "enable_placement_id": 0, 00:19:09.373 "enable_zerocopy_send_server": true, 00:19:09.373 "enable_zerocopy_send_client": false, 00:19:09.373 "zerocopy_threshold": 0, 00:19:09.373 "tls_version": 0, 00:19:09.373 "enable_ktls": false 00:19:09.373 } 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "method": "sock_impl_set_options", 00:19:09.373 "params": { 00:19:09.373 "impl_name": "ssl", 00:19:09.373 "recv_buf_size": 4096, 00:19:09.373 "send_buf_size": 4096, 00:19:09.373 "enable_recv_pipe": true, 00:19:09.373 "enable_quickack": false, 00:19:09.373 "enable_placement_id": 0, 00:19:09.373 "enable_zerocopy_send_server": true, 00:19:09.373 "enable_zerocopy_send_client": false, 00:19:09.373 "zerocopy_threshold": 0, 00:19:09.373 "tls_version": 0, 00:19:09.373 "enable_ktls": false 00:19:09.373 } 00:19:09.373 } 00:19:09.373 ] 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "subsystem": "vmd", 00:19:09.373 "config": [] 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "subsystem": "accel", 00:19:09.373 "config": [ 00:19:09.373 { 00:19:09.373 "method": "accel_set_options", 00:19:09.373 "params": { 00:19:09.373 "small_cache_size": 128, 00:19:09.373 
"large_cache_size": 16, 00:19:09.373 "task_count": 2048, 00:19:09.373 "sequence_count": 2048, 00:19:09.373 "buf_count": 2048 00:19:09.373 } 00:19:09.373 } 00:19:09.373 ] 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "subsystem": "bdev", 00:19:09.373 "config": [ 00:19:09.373 { 00:19:09.373 "method": "bdev_set_options", 00:19:09.373 "params": { 00:19:09.373 "bdev_io_pool_size": 65535, 00:19:09.373 "bdev_io_cache_size": 256, 00:19:09.373 "bdev_auto_examine": true, 00:19:09.373 "iobuf_small_cache_size": 128, 00:19:09.373 "iobuf_large_cache_size": 16 00:19:09.373 } 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "method": "bdev_raid_set_options", 00:19:09.373 "params": { 00:19:09.373 "process_window_size_kb": 1024 00:19:09.373 } 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "method": "bdev_iscsi_set_options", 00:19:09.373 "params": { 00:19:09.373 "timeout_sec": 30 00:19:09.373 } 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "method": "bdev_nvme_set_options", 00:19:09.373 "params": { 00:19:09.373 "action_on_timeout": "none", 00:19:09.373 "timeout_us": 0, 00:19:09.373 "timeout_admin_us": 0, 00:19:09.373 "keep_alive_timeout_ms": 10000, 00:19:09.373 "transport_retry_count": 4, 00:19:09.373 "arbitration_burst": 0, 00:19:09.373 "low_priority_weight": 0, 00:19:09.373 "medium_priority_weight": 0, 00:19:09.373 "high_priority_weight": 0, 00:19:09.373 "nvme_adminq_poll_period_us": 10000, 00:19:09.373 "nvme_ioq_poll_period_us": 0, 00:19:09.373 "io_queue_requests": 512, 00:19:09.373 "delay_cmd_submit": true, 00:19:09.373 "bdev_retry_count": 3, 00:19:09.373 "transport_ack_timeout": 0, 00:19:09.373 "ctrlr_loss_timeout_sec": 0, 00:19:09.373 "reconnect_delay_sec": 0, 00:19:09.373 "fast_io_fail_timeout_sec": 0, 00:19:09.373 "generate_uuids": false, 00:19:09.373 "transport_tos": 0, 00:19:09.373 "io_path_stat": false, 00:19:09.373 "allow_accel_sequence": false 00:19:09.373 } 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "method": "bdev_nvme_attach_controller", 00:19:09.373 "params": { 00:19:09.373 "name": "TLSTEST", 00:19:09.373 "trtype": "TCP", 00:19:09.373 "adrfam": "IPv4", 00:19:09.373 "traddr": "10.0.0.2", 00:19:09.373 "trsvcid": "4420", 00:19:09.373 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:09.373 "prchk_reftag": false, 00:19:09.373 "prchk_guard": false, 00:19:09.373 "ctrlr_loss_timeout_sec": 0, 00:19:09.373 "reconnect_delay_sec": 0, 00:19:09.373 "fast_io_fail_timeout_sec": 0, 00:19:09.373 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:19:09.373 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:09.373 "hdgst": 14:01:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.373 false, 00:19:09.373 "ddgst": false 00:19:09.373 } 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "method": "bdev_nvme_set_hotplug", 00:19:09.373 "params": { 00:19:09.373 "period_us": 100000, 00:19:09.373 "enable": false 00:19:09.373 } 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "method": "bdev_wait_for_examine" 00:19:09.373 } 00:19:09.373 ] 00:19:09.373 }, 00:19:09.373 { 00:19:09.373 "subsystem": "nbd", 00:19:09.373 "config": [] 00:19:09.373 } 00:19:09.373 ] 00:19:09.373 }' 00:19:09.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:19:09.373 14:01:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:09.374 14:01:00 -- common/autotest_common.sh@10 -- # set +x 00:19:09.634 [2024-07-23 14:01:00.423318] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:09.634 [2024-07-23 14:01:00.423369] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288308 ] 00:19:09.634 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.634 [2024-07-23 14:01:00.474347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.634 [2024-07-23 14:01:00.544848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.893 [2024-07-23 14:01:00.678844] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.459 14:01:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:10.459 14:01:01 -- common/autotest_common.sh@852 -- # return 0 00:19:10.459 14:01:01 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:10.459 Running I/O for 10 seconds... 00:19:20.438 00:19:20.438 Latency(us) 00:19:20.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.438 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:20.438 Verification LBA range: start 0x0 length 0x2000 00:19:20.438 TLSTESTn1 : 10.03 1682.45 6.57 0.00 0.00 75979.01 7465.41 103033.99 00:19:20.438 =================================================================================================================== 00:19:20.438 Total : 1682.45 6.57 0.00 0.00 75979.01 7465.41 103033.99 00:19:20.438 0 00:19:20.438 14:01:11 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:20.438 14:01:11 -- target/tls.sh@223 -- # killprocess 3288308 00:19:20.438 14:01:11 -- common/autotest_common.sh@926 -- # '[' -z 3288308 ']' 00:19:20.438 14:01:11 -- common/autotest_common.sh@930 -- # kill -0 3288308 00:19:20.438 14:01:11 -- common/autotest_common.sh@931 -- # uname 00:19:20.438 14:01:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:20.438 14:01:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3288308 00:19:20.438 14:01:11 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:20.438 14:01:11 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:20.438 14:01:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3288308' 00:19:20.438 killing process with pid 3288308 00:19:20.438 14:01:11 -- common/autotest_common.sh@945 -- # kill 3288308 00:19:20.438 Received shutdown signal, test time was about 10.000000 seconds 00:19:20.438 00:19:20.438 Latency(us) 00:19:20.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:20.438 =================================================================================================================== 00:19:20.438 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:20.438 14:01:11 -- common/autotest_common.sh@950 -- # wait 3288308 00:19:20.696 14:01:11 -- target/tls.sh@224 -- # killprocess 3288227 00:19:20.696 14:01:11 -- common/autotest_common.sh@926 -- # '[' -z 3288227 ']' 00:19:20.696 14:01:11 -- common/autotest_common.sh@930 -- # kill -0 3288227 00:19:20.696 14:01:11 -- 
common/autotest_common.sh@931 -- # uname 00:19:20.696 14:01:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:20.696 14:01:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3288227 00:19:20.696 14:01:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:20.696 14:01:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:20.696 14:01:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3288227' 00:19:20.696 killing process with pid 3288227 00:19:20.696 14:01:11 -- common/autotest_common.sh@945 -- # kill 3288227 00:19:20.696 14:01:11 -- common/autotest_common.sh@950 -- # wait 3288227 00:19:20.955 14:01:11 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:19:20.955 14:01:11 -- target/tls.sh@227 -- # cleanup 00:19:20.955 14:01:11 -- target/tls.sh@15 -- # process_shm --id 0 00:19:20.955 14:01:11 -- common/autotest_common.sh@796 -- # type=--id 00:19:20.955 14:01:11 -- common/autotest_common.sh@797 -- # id=0 00:19:20.955 14:01:11 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:20.955 14:01:11 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:20.955 14:01:11 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:20.955 14:01:11 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:20.955 14:01:11 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:20.955 14:01:11 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:20.955 nvmf_trace.0 00:19:20.955 14:01:11 -- common/autotest_common.sh@811 -- # return 0 00:19:20.955 14:01:11 -- target/tls.sh@16 -- # killprocess 3288308 00:19:20.955 14:01:11 -- common/autotest_common.sh@926 -- # '[' -z 3288308 ']' 00:19:20.955 14:01:11 -- common/autotest_common.sh@930 -- # kill -0 3288308 00:19:20.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3288308) - No such process 00:19:20.955 14:01:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3288308 is not found' 00:19:20.955 Process with pid 3288308 is not found 00:19:20.955 14:01:11 -- target/tls.sh@17 -- # nvmftestfini 00:19:20.955 14:01:11 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:20.955 14:01:11 -- nvmf/common.sh@116 -- # sync 00:19:20.955 14:01:11 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:20.955 14:01:11 -- nvmf/common.sh@119 -- # set +e 00:19:20.955 14:01:11 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:20.955 14:01:11 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:20.955 rmmod nvme_tcp 00:19:20.955 rmmod nvme_fabrics 00:19:20.955 rmmod nvme_keyring 00:19:21.214 14:01:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:21.214 14:01:11 -- nvmf/common.sh@123 -- # set -e 00:19:21.214 14:01:11 -- nvmf/common.sh@124 -- # return 0 00:19:21.215 14:01:11 -- nvmf/common.sh@477 -- # '[' -n 3288227 ']' 00:19:21.215 14:01:11 -- nvmf/common.sh@478 -- # killprocess 3288227 00:19:21.215 14:01:11 -- common/autotest_common.sh@926 -- # '[' -z 3288227 ']' 00:19:21.215 14:01:11 -- common/autotest_common.sh@930 -- # kill -0 3288227 00:19:21.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3288227) - No such process 00:19:21.215 14:01:11 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3288227 is not found' 00:19:21.215 Process with pid 3288227 is not found 00:19:21.215 
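[Annotation] The pair of "No such process" hits above is the expected path, not a failure: the TLS test's cleanup had already reaped both daemons, and killprocess probes with kill -0 before doing anything destructive. A condensed approximation of that guard (the real helper in autotest_common.sh also inspects the process name via ps and special-cases processes launched under sudo):

# Probe the pid first; a process that already exited is reported, not fatal.
killprocess() {
    local pid=$1
    if kill -0 "$pid" 2>/dev/null; then
        # Still alive: terminate and reap it so the test tree stays clean.
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    else
        echo "Process with pid $pid is not found"
    fi
}

killprocess 3288308   # prints "is not found" here, exactly as in the log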
14:01:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:21.215 14:01:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:21.215 14:01:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:21.215 14:01:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:21.215 14:01:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:21.215 14:01:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:21.215 14:01:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:21.215 14:01:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.118 14:01:14 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:23.118 14:01:14 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:19:23.118 00:19:23.118 real 1m12.179s 00:19:23.118 user 1m49.611s 00:19:23.118 sys 0m24.358s 00:19:23.118 14:01:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:23.118 14:01:14 -- common/autotest_common.sh@10 -- # set +x 00:19:23.118 ************************************ 00:19:23.118 END TEST nvmf_tls 00:19:23.118 ************************************ 00:19:23.118 14:01:14 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:23.118 14:01:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:23.118 14:01:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:23.118 14:01:14 -- common/autotest_common.sh@10 -- # set +x 00:19:23.118 ************************************ 00:19:23.118 START TEST nvmf_fips 00:19:23.118 ************************************ 00:19:23.119 14:01:14 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:23.378 * Looking for test storage... 
00:19:23.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:23.378 14:01:14 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:23.378 14:01:14 -- nvmf/common.sh@7 -- # uname -s 00:19:23.378 14:01:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:23.378 14:01:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:23.378 14:01:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:23.378 14:01:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:23.378 14:01:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:23.378 14:01:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:23.378 14:01:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:23.378 14:01:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:23.378 14:01:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:23.378 14:01:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:23.378 14:01:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.378 14:01:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:23.378 14:01:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:23.378 14:01:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:23.378 14:01:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:23.378 14:01:14 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:23.378 14:01:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:23.378 14:01:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.378 14:01:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.379 14:01:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.379 14:01:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.379 14:01:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.379 14:01:14 -- paths/export.sh@5 -- # export PATH 00:19:23.379 14:01:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:23.379 14:01:14 -- nvmf/common.sh@46 -- # : 0 00:19:23.379 14:01:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:23.379 14:01:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:23.379 14:01:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:23.379 14:01:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:23.379 14:01:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:23.379 14:01:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:23.379 14:01:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:23.379 14:01:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:23.379 14:01:14 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:23.379 14:01:14 -- fips/fips.sh@89 -- # check_openssl_version 00:19:23.379 14:01:14 -- fips/fips.sh@83 -- # local target=3.0.0 00:19:23.379 14:01:14 -- fips/fips.sh@85 -- # openssl version 00:19:23.379 14:01:14 -- fips/fips.sh@85 -- # awk '{print $2}' 00:19:23.379 14:01:14 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:19:23.379 14:01:14 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:19:23.379 14:01:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:23.379 14:01:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:23.379 14:01:14 -- scripts/common.sh@335 -- # IFS=.-: 00:19:23.379 14:01:14 -- scripts/common.sh@335 -- # read -ra ver1 00:19:23.379 14:01:14 -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.379 14:01:14 -- scripts/common.sh@336 -- # read -ra ver2 00:19:23.379 14:01:14 -- scripts/common.sh@337 -- # local 'op=>=' 00:19:23.379 14:01:14 -- scripts/common.sh@339 -- # ver1_l=3 00:19:23.379 14:01:14 -- scripts/common.sh@340 -- # ver2_l=3 00:19:23.379 14:01:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:23.379 14:01:14 -- scripts/common.sh@343 -- # case "$op" in 00:19:23.379 14:01:14 -- scripts/common.sh@347 -- # : 1 00:19:23.379 14:01:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:23.379 14:01:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.379 14:01:14 -- scripts/common.sh@364 -- # decimal 3 00:19:23.379 14:01:14 -- scripts/common.sh@352 -- # local d=3 00:19:23.379 14:01:14 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:23.379 14:01:14 -- scripts/common.sh@354 -- # echo 3 00:19:23.379 14:01:14 -- scripts/common.sh@364 -- # ver1[v]=3 00:19:23.379 14:01:14 -- scripts/common.sh@365 -- # decimal 3 00:19:23.379 14:01:14 -- scripts/common.sh@352 -- # local d=3 00:19:23.379 14:01:14 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:23.379 14:01:14 -- scripts/common.sh@354 -- # echo 3 00:19:23.379 14:01:14 -- scripts/common.sh@365 -- # ver2[v]=3 00:19:23.379 14:01:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:23.379 14:01:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:23.379 14:01:14 -- scripts/common.sh@363 -- # (( v++ )) 00:19:23.379 14:01:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:23.379 14:01:14 -- scripts/common.sh@364 -- # decimal 0 00:19:23.379 14:01:14 -- scripts/common.sh@352 -- # local d=0 00:19:23.379 14:01:14 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:23.379 14:01:14 -- scripts/common.sh@354 -- # echo 0 00:19:23.379 14:01:14 -- scripts/common.sh@364 -- # ver1[v]=0 00:19:23.379 14:01:14 -- scripts/common.sh@365 -- # decimal 0 00:19:23.379 14:01:14 -- scripts/common.sh@352 -- # local d=0 00:19:23.379 14:01:14 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:23.379 14:01:14 -- scripts/common.sh@354 -- # echo 0 00:19:23.379 14:01:14 -- scripts/common.sh@365 -- # ver2[v]=0 00:19:23.379 14:01:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:23.379 14:01:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:23.379 14:01:14 -- scripts/common.sh@363 -- # (( v++ )) 00:19:23.379 14:01:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:23.379 14:01:14 -- scripts/common.sh@364 -- # decimal 9 00:19:23.379 14:01:14 -- scripts/common.sh@352 -- # local d=9 00:19:23.379 14:01:14 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:19:23.379 14:01:14 -- scripts/common.sh@354 -- # echo 9 00:19:23.379 14:01:14 -- scripts/common.sh@364 -- # ver1[v]=9 00:19:23.379 14:01:14 -- scripts/common.sh@365 -- # decimal 0 00:19:23.379 14:01:14 -- scripts/common.sh@352 -- # local d=0 00:19:23.379 14:01:14 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:23.379 14:01:14 -- scripts/common.sh@354 -- # echo 0 00:19:23.379 14:01:14 -- scripts/common.sh@365 -- # ver2[v]=0 00:19:23.379 14:01:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:23.379 14:01:14 -- scripts/common.sh@366 -- # return 0 00:19:23.379 14:01:14 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:19:23.379 14:01:14 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:23.379 14:01:14 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:19:23.379 14:01:14 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:23.379 14:01:14 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:23.379 14:01:14 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:19:23.379 14:01:14 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:19:23.379 14:01:14 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:19:23.379 14:01:14 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:19:23.379 14:01:14 -- fips/fips.sh@114 -- # build_openssl_config 00:19:23.379 14:01:14 -- fips/fips.sh@37 -- # cat 00:19:23.379 14:01:14 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:19:23.379 14:01:14 -- fips/fips.sh@58 -- # cat - 00:19:23.379 14:01:14 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:23.379 14:01:14 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:23.379 14:01:14 -- fips/fips.sh@117 -- # mapfile -t providers 00:19:23.379 14:01:14 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:19:23.379 14:01:14 -- fips/fips.sh@117 -- # openssl list -providers 00:19:23.379 14:01:14 -- fips/fips.sh@117 -- # grep name 00:19:23.379 14:01:14 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:23.379 14:01:14 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:23.380 14:01:14 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:23.380 14:01:14 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:23.380 14:01:14 -- common/autotest_common.sh@640 -- # local es=0 00:19:23.380 14:01:14 -- fips/fips.sh@128 -- # : 00:19:23.380 14:01:14 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:23.380 14:01:14 -- common/autotest_common.sh@628 -- # local arg=openssl 00:19:23.380 14:01:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.380 14:01:14 -- common/autotest_common.sh@632 -- # type -t openssl 00:19:23.380 14:01:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.380 14:01:14 -- common/autotest_common.sh@634 -- # type -P openssl 00:19:23.380 14:01:14 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:23.380 14:01:14 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:19:23.380 14:01:14 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:19:23.380 14:01:14 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:19:23.380 Error setting digest 00:19:23.380 00220A8B317F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:19:23.380 00220A8B317F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:19:23.380 14:01:14 -- common/autotest_common.sh@643 -- # es=1 00:19:23.380 14:01:14 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:23.380 14:01:14 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:23.380 14:01:14 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
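[Annotation] That failing openssl md5 is the point of the whole block: fips.sh pins OpenSSL to its FIPS provider through a generated OPENSSL_CONF, then treats a rejected legacy digest as proof that enforcement is on. A standalone sketch of the probe, with one caveat: the bare activate = 1 below suffices on builds like the RHEL one in this trace (where, as the warning above says, openssl fipsinstall is disabled), while stock OpenSSL 3 additionally needs the fipsmodule.cnf that fipsinstall produces:

# Minimal provider pin: load only the fips and base providers.
cat > spdk_fips.conf <<'EOF'
openssl_conf = openssl_init

[openssl_init]
providers = provider_sect

[provider_sect]
fips = fips_sect
base = base_sect

[fips_sect]
activate = 1

[base_sect]
activate = 1
EOF

export OPENSSL_CONF=$PWD/spdk_fips.conf
openssl list -providers | grep name    # expect a base and a fips provider

# Negative test: MD5 is not FIPS-approved, so success here means failure.
if echo probe | openssl md5 >/dev/null 2>&1; then
    echo "FAIL: MD5 accepted; FIPS mode is not enforced" >&2
    exit 1
fi
echo "OK: MD5 refused under FIPS"

The "Error setting digest" lines captured above are exactly what the refused branch looks like from the openssl CLI.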
00:19:23.380 14:01:14 -- fips/fips.sh@131 -- # nvmftestinit 00:19:23.380 14:01:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:23.380 14:01:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.380 14:01:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:23.380 14:01:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:23.380 14:01:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:23.380 14:01:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.380 14:01:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:23.380 14:01:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:23.380 14:01:14 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:23.380 14:01:14 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:23.380 14:01:14 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:23.380 14:01:14 -- common/autotest_common.sh@10 -- # set +x 00:19:28.719 14:01:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:28.719 14:01:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:28.719 14:01:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:28.719 14:01:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:28.719 14:01:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:28.719 14:01:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:28.719 14:01:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:28.719 14:01:19 -- nvmf/common.sh@294 -- # net_devs=() 00:19:28.719 14:01:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:28.719 14:01:19 -- nvmf/common.sh@295 -- # e810=() 00:19:28.719 14:01:19 -- nvmf/common.sh@295 -- # local -ga e810 00:19:28.719 14:01:19 -- nvmf/common.sh@296 -- # x722=() 00:19:28.719 14:01:19 -- nvmf/common.sh@296 -- # local -ga x722 00:19:28.719 14:01:19 -- nvmf/common.sh@297 -- # mlx=() 00:19:28.719 14:01:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:28.719 14:01:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.719 14:01:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:28.719 14:01:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:28.719 14:01:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:28.719 14:01:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.719 14:01:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:28.719 Found 0000:86:00.0 
(0x8086 - 0x159b) 00:19:28.719 14:01:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:28.719 14:01:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:28.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:28.719 14:01:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:28.719 14:01:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:28.719 14:01:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.719 14:01:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.719 14:01:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.719 14:01:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.719 14:01:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:28.719 Found net devices under 0000:86:00.0: cvl_0_0 00:19:28.719 14:01:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.719 14:01:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:28.719 14:01:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.719 14:01:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:28.720 14:01:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.720 14:01:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:28.720 Found net devices under 0000:86:00.1: cvl_0_1 00:19:28.720 14:01:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.720 14:01:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:28.720 14:01:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:28.720 14:01:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:28.720 14:01:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:28.720 14:01:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:28.720 14:01:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:28.720 14:01:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:28.720 14:01:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:28.720 14:01:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:28.720 14:01:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:28.720 14:01:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:28.720 14:01:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:28.720 14:01:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:28.720 14:01:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:28.720 14:01:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:28.720 14:01:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:28.720 14:01:19 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:19:28.720 14:01:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:28.979 14:01:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:28.979 14:01:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:28.979 14:01:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:28.979 14:01:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:28.979 14:01:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:28.979 14:01:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:28.979 14:01:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:28.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:28.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:19:28.979 00:19:28.979 --- 10.0.0.2 ping statistics --- 00:19:28.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.979 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:19:28.979 14:01:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:28.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:28.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:19:28.979 00:19:28.979 --- 10.0.0.1 ping statistics --- 00:19:28.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:28.979 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:19:28.979 14:01:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:28.979 14:01:19 -- nvmf/common.sh@410 -- # return 0 00:19:28.979 14:01:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:28.979 14:01:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:28.979 14:01:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:28.979 14:01:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:28.979 14:01:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:28.979 14:01:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:28.979 14:01:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:28.979 14:01:19 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:28.979 14:01:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:28.979 14:01:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:28.979 14:01:19 -- common/autotest_common.sh@10 -- # set +x 00:19:28.979 14:01:19 -- nvmf/common.sh@469 -- # nvmfpid=3293741 00:19:28.979 14:01:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:28.979 14:01:19 -- nvmf/common.sh@470 -- # waitforlisten 3293741 00:19:28.979 14:01:19 -- common/autotest_common.sh@819 -- # '[' -z 3293741 ']' 00:19:28.979 14:01:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.980 14:01:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:28.980 14:01:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.980 14:01:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:28.980 14:01:19 -- common/autotest_common.sh@10 -- # set +x 00:19:29.237 [2024-07-23 14:01:20.000841] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:19:29.237 [2024-07-23 14:01:20.000888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.237 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.237 [2024-07-23 14:01:20.061570] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.237 [2024-07-23 14:01:20.145289] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:29.237 [2024-07-23 14:01:20.145402] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.237 [2024-07-23 14:01:20.145409] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.237 [2024-07-23 14:01:20.145416] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:29.237 [2024-07-23 14:01:20.145431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.802 14:01:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:29.802 14:01:20 -- common/autotest_common.sh@852 -- # return 0 00:19:29.802 14:01:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:29.802 14:01:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:29.802 14:01:20 -- common/autotest_common.sh@10 -- # set +x 00:19:29.802 14:01:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.802 14:01:20 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:29.802 14:01:20 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:29.802 14:01:20 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.802 14:01:20 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:29.802 14:01:20 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.802 14:01:20 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.802 14:01:20 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:29.802 14:01:20 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:30.060 [2024-07-23 14:01:20.965725] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.060 [2024-07-23 14:01:20.981734] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:30.060 [2024-07-23 14:01:20.981902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:30.060 malloc0 00:19:30.060 14:01:21 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:30.060 14:01:21 -- fips/fips.sh@148 -- # bdevperf_pid=3293957 00:19:30.060 14:01:21 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:30.060 14:01:21 -- fips/fips.sh@149 -- # waitforlisten 3293957 /var/tmp/bdevperf.sock 00:19:30.060 14:01:21 -- common/autotest_common.sh@819 -- # '[' -z 3293957 ']' 00:19:30.060 14:01:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.060 14:01:21 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:19:30.060 14:01:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:30.060 14:01:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:30.060 14:01:21 -- common/autotest_common.sh@10 -- # set +x 00:19:30.318 [2024-07-23 14:01:21.087660] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:19:30.318 [2024-07-23 14:01:21.087706] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3293957 ] 00:19:30.318 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.318 [2024-07-23 14:01:21.137230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.318 [2024-07-23 14:01:21.206971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.883 14:01:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:30.883 14:01:21 -- common/autotest_common.sh@852 -- # return 0 00:19:30.883 14:01:21 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:31.141 [2024-07-23 14:01:22.001492] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:31.142 TLSTESTn1 00:19:31.142 14:01:22 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:31.399 Running I/O for 10 seconds... 
00:19:41.371 00:19:41.371 Latency(us) 00:19:41.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.371 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:41.371 Verification LBA range: start 0x0 length 0x2000 00:19:41.371 TLSTESTn1 : 10.05 1629.84 6.37 0.00 0.00 78399.69 12480.33 112152.04 00:19:41.371 =================================================================================================================== 00:19:41.371 Total : 1629.84 6.37 0.00 0.00 78399.69 12480.33 112152.04 00:19:41.371 0 00:19:41.371 14:01:32 -- fips/fips.sh@1 -- # cleanup 00:19:41.371 14:01:32 -- fips/fips.sh@15 -- # process_shm --id 0 00:19:41.371 14:01:32 -- common/autotest_common.sh@796 -- # type=--id 00:19:41.371 14:01:32 -- common/autotest_common.sh@797 -- # id=0 00:19:41.371 14:01:32 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:19:41.371 14:01:32 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:41.371 14:01:32 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:19:41.371 14:01:32 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:19:41.371 14:01:32 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:19:41.371 14:01:32 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:41.371 nvmf_trace.0 00:19:41.371 14:01:32 -- common/autotest_common.sh@811 -- # return 0 00:19:41.371 14:01:32 -- fips/fips.sh@16 -- # killprocess 3293957 00:19:41.371 14:01:32 -- common/autotest_common.sh@926 -- # '[' -z 3293957 ']' 00:19:41.371 14:01:32 -- common/autotest_common.sh@930 -- # kill -0 3293957 00:19:41.371 14:01:32 -- common/autotest_common.sh@931 -- # uname 00:19:41.371 14:01:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:41.371 14:01:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3293957 00:19:41.371 14:01:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:41.371 14:01:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:41.371 14:01:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3293957' 00:19:41.371 killing process with pid 3293957 00:19:41.371 14:01:32 -- common/autotest_common.sh@945 -- # kill 3293957 00:19:41.371 Received shutdown signal, test time was about 10.000000 seconds 00:19:41.371 00:19:41.371 Latency(us) 00:19:41.371 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:41.371 =================================================================================================================== 00:19:41.371 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:41.371 14:01:32 -- common/autotest_common.sh@950 -- # wait 3293957 00:19:41.630 14:01:32 -- fips/fips.sh@17 -- # nvmftestfini 00:19:41.630 14:01:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:41.630 14:01:32 -- nvmf/common.sh@116 -- # sync 00:19:41.630 14:01:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:41.630 14:01:32 -- nvmf/common.sh@119 -- # set +e 00:19:41.630 14:01:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:41.630 14:01:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:41.630 rmmod nvme_tcp 00:19:41.630 rmmod nvme_fabrics 00:19:41.630 rmmod nvme_keyring 00:19:41.630 14:01:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:41.630 14:01:32 -- nvmf/common.sh@123 -- # set -e 00:19:41.630 14:01:32 -- nvmf/common.sh@124 -- # return 0 
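[Annotation] One step worth pulling out of the fips run being torn down here is how the PSK travelled: fips.sh wrote the NVMe TLS interchange-format secret to a file chmodded to 0600, and the initiator then handed that same file to bdev_nvme_attach_controller. A condensed replay of those two steps (key string and flags copied from the trace; the rpc.py and key paths are shortened relative to the workspace):

# Stage the pre-shared key; the test restricts it to 0600 before use.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
echo -n "$key" > key.txt
chmod 0600 key.txt

# Initiator side: the same call the trace makes at fips.sh@151.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt

The target side used the identical key file when it registered its listener, which is why the bdevperf connection above handshakes successfully before the ten-second verify workload runs.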
00:19:41.630 14:01:32 -- nvmf/common.sh@477 -- # '[' -n 3293741 ']' 00:19:41.630 14:01:32 -- nvmf/common.sh@478 -- # killprocess 3293741 00:19:41.630 14:01:32 -- common/autotest_common.sh@926 -- # '[' -z 3293741 ']' 00:19:41.630 14:01:32 -- common/autotest_common.sh@930 -- # kill -0 3293741 00:19:41.630 14:01:32 -- common/autotest_common.sh@931 -- # uname 00:19:41.630 14:01:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:41.630 14:01:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3293741 00:19:41.892 14:01:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:19:41.892 14:01:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:19:41.892 14:01:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3293741' 00:19:41.892 killing process with pid 3293741 00:19:41.892 14:01:32 -- common/autotest_common.sh@945 -- # kill 3293741 00:19:41.892 14:01:32 -- common/autotest_common.sh@950 -- # wait 3293741 00:19:41.892 14:01:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:41.892 14:01:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:41.892 14:01:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:41.892 14:01:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.892 14:01:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:41.892 14:01:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.892 14:01:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.892 14:01:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.434 14:01:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:44.434 14:01:34 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:44.434 00:19:44.434 real 0m20.878s 00:19:44.434 user 0m22.887s 00:19:44.434 sys 0m8.724s 00:19:44.434 14:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:44.434 14:01:34 -- common/autotest_common.sh@10 -- # set +x 00:19:44.434 ************************************ 00:19:44.434 END TEST nvmf_fips 00:19:44.434 ************************************ 00:19:44.434 14:01:35 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:19:44.434 14:01:35 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:44.434 14:01:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:44.434 14:01:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:44.434 14:01:35 -- common/autotest_common.sh@10 -- # set +x 00:19:44.434 ************************************ 00:19:44.434 START TEST nvmf_fuzz 00:19:44.434 ************************************ 00:19:44.434 14:01:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:19:44.434 * Looking for test storage... 
00:19:44.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:44.434 14:01:35 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:44.434 14:01:35 -- nvmf/common.sh@7 -- # uname -s 00:19:44.434 14:01:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:44.434 14:01:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:44.434 14:01:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:44.434 14:01:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:44.434 14:01:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:44.434 14:01:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:44.434 14:01:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:44.434 14:01:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:44.434 14:01:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:44.434 14:01:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:44.434 14:01:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.434 14:01:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:44.434 14:01:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:44.434 14:01:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:44.434 14:01:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:44.434 14:01:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:44.434 14:01:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:44.434 14:01:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:44.434 14:01:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:44.434 14:01:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.434 14:01:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.434 14:01:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.434 14:01:35 -- paths/export.sh@5 -- # export PATH 00:19:44.434 14:01:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:44.434 14:01:35 -- nvmf/common.sh@46 -- # : 0 00:19:44.434 14:01:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:44.434 14:01:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:44.434 14:01:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:44.434 14:01:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:44.434 14:01:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:44.434 14:01:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:44.434 14:01:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:44.434 14:01:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:44.434 14:01:35 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:19:44.434 14:01:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:44.434 14:01:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.434 14:01:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:44.434 14:01:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:44.434 14:01:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:44.434 14:01:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.434 14:01:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:44.434 14:01:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.434 14:01:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:44.434 14:01:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:44.434 14:01:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:44.434 14:01:35 -- common/autotest_common.sh@10 -- # set +x 00:19:49.713 14:01:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.713 14:01:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:49.714 14:01:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:49.714 14:01:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:49.714 14:01:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:49.714 14:01:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:49.714 14:01:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:49.714 14:01:40 -- nvmf/common.sh@294 -- # net_devs=() 00:19:49.714 14:01:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:49.714 14:01:40 -- nvmf/common.sh@295 -- # e810=() 00:19:49.714 14:01:40 -- nvmf/common.sh@295 -- # local -ga e810 00:19:49.714 14:01:40 -- nvmf/common.sh@296 -- # x722=() 
00:19:49.714 14:01:40 -- nvmf/common.sh@296 -- # local -ga x722 00:19:49.714 14:01:40 -- nvmf/common.sh@297 -- # mlx=() 00:19:49.714 14:01:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:49.714 14:01:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.714 14:01:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:49.714 14:01:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:49.714 14:01:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:49.714 14:01:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.714 14:01:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:49.714 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:49.714 14:01:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.714 14:01:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:49.714 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:49.714 14:01:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:49.714 14:01:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.714 14:01:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.714 14:01:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.714 14:01:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.714 14:01:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:49.714 Found net devices under 0000:86:00.0: cvl_0_0 00:19:49.714 14:01:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:19:49.714 14:01:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.714 14:01:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.714 14:01:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.714 14:01:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.714 14:01:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:49.714 Found net devices under 0000:86:00.1: cvl_0_1 00:19:49.714 14:01:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.714 14:01:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:49.714 14:01:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:49.714 14:01:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:49.714 14:01:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:49.714 14:01:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.714 14:01:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:49.714 14:01:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:49.714 14:01:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:49.714 14:01:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:49.714 14:01:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:49.714 14:01:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:49.714 14:01:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:49.714 14:01:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.714 14:01:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:49.714 14:01:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:49.714 14:01:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:49.714 14:01:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.714 14:01:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.714 14:01:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.714 14:01:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:49.714 14:01:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.714 14:01:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.714 14:01:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.714 14:01:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:49.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:49.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:19:49.714 00:19:49.714 --- 10.0.0.2 ping statistics --- 00:19:49.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.714 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:19:49.714 14:01:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:49.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:19:49.714 00:19:49.714 --- 10.0.0.1 ping statistics --- 00:19:49.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.714 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:19:49.714 14:01:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.714 14:01:40 -- nvmf/common.sh@410 -- # return 0 00:19:49.715 14:01:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.715 14:01:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.715 14:01:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:49.715 14:01:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:49.715 14:01:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.715 14:01:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:49.715 14:01:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:49.715 14:01:40 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=3299365 00:19:49.715 14:01:40 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:49.715 14:01:40 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:49.715 14:01:40 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 3299365 00:19:49.715 14:01:40 -- common/autotest_common.sh@819 -- # '[' -z 3299365 ']' 00:19:49.715 14:01:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.715 14:01:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:49.715 14:01:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
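(Note: the nvmf_tcp_init sequence just completed builds a two-endpoint test topology out of a single dual-port NIC: port cvl_0_0 is moved into a private network namespace and serves as the target at 10.0.0.2, while its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the two pings above verify the path in both directions before the target starts. Condensed from the commands in the log, with error handling omitted and a simplified stand-in for waitforlisten:)

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
ping -c 1 10.0.0.2                               # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target namespace -> initiator
# Launch the target inside the namespace; waitforlisten then polls until the
# RPC socket answers (checking for the socket file here is a simplification).
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done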
00:19:49.715 14:01:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:49.715 14:01:40 -- common/autotest_common.sh@10 -- # set +x 00:19:50.653 14:01:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:50.653 14:01:41 -- common/autotest_common.sh@852 -- # return 0 00:19:50.653 14:01:41 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:50.653 14:01:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.653 14:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:50.653 14:01:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.653 14:01:41 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:19:50.653 14:01:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.653 14:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:50.653 Malloc0 00:19:50.653 14:01:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.653 14:01:41 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:50.653 14:01:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.653 14:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:50.653 14:01:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.653 14:01:41 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:50.653 14:01:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.653 14:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:50.653 14:01:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.653 14:01:41 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:50.653 14:01:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.653 14:01:41 -- common/autotest_common.sh@10 -- # set +x 00:19:50.653 14:01:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.653 14:01:41 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:19:50.653 14:01:41 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:20:22.757 Fuzzing completed. Shutting down the fuzz application 00:20:22.757 00:20:22.757 Dumping successful admin opcodes: 00:20:22.757 8, 9, 10, 24, 00:20:22.757 Dumping successful io opcodes: 00:20:22.757 0, 9, 00:20:22.757 NS: 0x200003aeff00 I/O qp, Total commands completed: 896541, total successful commands: 5222, random_seed: 596678528 00:20:22.757 NS: 0x200003aeff00 admin qp, Total commands completed: 106572, total successful commands: 874, random_seed: 3408207616 00:20:22.757 14:02:11 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:20:22.757 Fuzzing completed. 
Shutting down the fuzz application 00:20:22.757 00:20:22.757 Dumping successful admin opcodes: 00:20:22.757 24, 00:20:22.757 Dumping successful io opcodes: 00:20:22.757 00:20:22.757 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 245913334 00:20:22.757 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 246007384 00:20:22.757 14:02:13 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:22.757 14:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:22.757 14:02:13 -- common/autotest_common.sh@10 -- # set +x 00:20:22.757 14:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:22.757 14:02:13 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:22.757 14:02:13 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:20:22.757 14:02:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:22.757 14:02:13 -- nvmf/common.sh@116 -- # sync 00:20:22.757 14:02:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:22.757 14:02:13 -- nvmf/common.sh@119 -- # set +e 00:20:22.757 14:02:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:22.757 14:02:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:22.757 rmmod nvme_tcp 00:20:22.757 rmmod nvme_fabrics 00:20:22.757 rmmod nvme_keyring 00:20:22.757 14:02:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:22.757 14:02:13 -- nvmf/common.sh@123 -- # set -e 00:20:22.757 14:02:13 -- nvmf/common.sh@124 -- # return 0 00:20:22.757 14:02:13 -- nvmf/common.sh@477 -- # '[' -n 3299365 ']' 00:20:22.757 14:02:13 -- nvmf/common.sh@478 -- # killprocess 3299365 00:20:22.757 14:02:13 -- common/autotest_common.sh@926 -- # '[' -z 3299365 ']' 00:20:22.757 14:02:13 -- common/autotest_common.sh@930 -- # kill -0 3299365 00:20:22.757 14:02:13 -- common/autotest_common.sh@931 -- # uname 00:20:22.757 14:02:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.757 14:02:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3299365 00:20:22.757 14:02:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:22.757 14:02:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:22.757 14:02:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3299365' 00:20:22.757 killing process with pid 3299365 00:20:22.757 14:02:13 -- common/autotest_common.sh@945 -- # kill 3299365 00:20:22.757 14:02:13 -- common/autotest_common.sh@950 -- # wait 3299365 00:20:22.757 14:02:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:22.757 14:02:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:22.757 14:02:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:22.757 14:02:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.757 14:02:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:22.757 14:02:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.757 14:02:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:22.757 14:02:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.660 14:02:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:24.660 14:02:15 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:20:24.660 00:20:24.660 real 0m40.612s 00:20:24.660 user 0m53.389s 00:20:24.660 sys 
0m16.736s 00:20:24.660 14:02:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:24.660 14:02:15 -- common/autotest_common.sh@10 -- # set +x 00:20:24.660 ************************************ 00:20:24.660 END TEST nvmf_fuzz 00:20:24.660 ************************************ 00:20:24.660 14:02:15 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:24.660 14:02:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:24.660 14:02:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:24.660 14:02:15 -- common/autotest_common.sh@10 -- # set +x 00:20:24.660 ************************************ 00:20:24.660 START TEST nvmf_multiconnection 00:20:24.660 ************************************ 00:20:24.660 14:02:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:20:24.920 * Looking for test storage... 00:20:24.920 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:24.920 14:02:15 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.920 14:02:15 -- nvmf/common.sh@7 -- # uname -s 00:20:24.920 14:02:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.920 14:02:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.920 14:02:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.920 14:02:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.920 14:02:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.920 14:02:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.920 14:02:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.920 14:02:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.920 14:02:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.920 14:02:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.920 14:02:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.920 14:02:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:24.920 14:02:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.920 14:02:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.920 14:02:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.920 14:02:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.920 14:02:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.920 14:02:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.920 14:02:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.920 14:02:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.920 14:02:15 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.920 14:02:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.920 14:02:15 -- paths/export.sh@5 -- # export PATH 00:20:24.920 14:02:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.920 14:02:15 -- nvmf/common.sh@46 -- # : 0 00:20:24.920 14:02:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:24.920 14:02:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:24.920 14:02:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:24.920 14:02:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.920 14:02:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.920 14:02:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:24.920 14:02:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:24.920 14:02:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:24.920 14:02:15 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:24.920 14:02:15 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:24.920 14:02:15 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:20:24.920 14:02:15 -- target/multiconnection.sh@16 -- # nvmftestinit 00:20:24.920 14:02:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:24.920 14:02:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:24.920 14:02:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:24.920 14:02:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:24.920 14:02:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:24.920 14:02:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.920 14:02:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:24.920 14:02:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.920 14:02:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:24.920 14:02:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:24.920 14:02:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:24.920 14:02:15 -- common/autotest_common.sh@10 -- 
# set +x 00:20:30.193 14:02:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:30.193 14:02:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:30.193 14:02:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:30.193 14:02:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:30.193 14:02:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:30.193 14:02:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:30.193 14:02:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:30.193 14:02:21 -- nvmf/common.sh@294 -- # net_devs=() 00:20:30.193 14:02:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:30.193 14:02:21 -- nvmf/common.sh@295 -- # e810=() 00:20:30.193 14:02:21 -- nvmf/common.sh@295 -- # local -ga e810 00:20:30.193 14:02:21 -- nvmf/common.sh@296 -- # x722=() 00:20:30.193 14:02:21 -- nvmf/common.sh@296 -- # local -ga x722 00:20:30.193 14:02:21 -- nvmf/common.sh@297 -- # mlx=() 00:20:30.193 14:02:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:30.193 14:02:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:30.193 14:02:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:30.193 14:02:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:30.193 14:02:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:30.193 14:02:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:30.193 14:02:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:30.193 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:30.193 14:02:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:30.193 14:02:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:30.193 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:30.193 14:02:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:30.193 14:02:21 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:30.193 14:02:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:30.193 14:02:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.193 14:02:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:30.193 14:02:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.193 14:02:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:30.193 Found net devices under 0000:86:00.0: cvl_0_0 00:20:30.193 14:02:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.193 14:02:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:30.193 14:02:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:30.193 14:02:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:30.193 14:02:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:30.193 14:02:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:30.193 Found net devices under 0000:86:00.1: cvl_0_1 00:20:30.193 14:02:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:30.193 14:02:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:30.193 14:02:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:30.193 14:02:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:20:30.193 14:02:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:20:30.193 14:02:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:30.193 14:02:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:30.193 14:02:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:30.193 14:02:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:20:30.193 14:02:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:30.193 14:02:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:30.194 14:02:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:20:30.194 14:02:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:30.194 14:02:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:30.194 14:02:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:20:30.194 14:02:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:20:30.194 14:02:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:20:30.194 14:02:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:30.194 14:02:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:30.194 14:02:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:30.194 14:02:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:20:30.194 14:02:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:30.452 14:02:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:30.452 14:02:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:30.452 14:02:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:20:30.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:30.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:20:30.452 00:20:30.452 --- 10.0.0.2 ping statistics --- 00:20:30.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.452 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:20:30.452 14:02:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:30.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:20:30.452 00:20:30.452 --- 10.0.0.1 ping statistics --- 00:20:30.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.452 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:20:30.452 14:02:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.452 14:02:21 -- nvmf/common.sh@410 -- # return 0 00:20:30.452 14:02:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:30.452 14:02:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:30.452 14:02:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:30.452 14:02:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:30.452 14:02:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:30.452 14:02:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:30.452 14:02:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:30.452 14:02:21 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:20:30.452 14:02:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:30.452 14:02:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:30.452 14:02:21 -- common/autotest_common.sh@10 -- # set +x 00:20:30.452 14:02:21 -- nvmf/common.sh@469 -- # nvmfpid=3308256 00:20:30.452 14:02:21 -- nvmf/common.sh@470 -- # waitforlisten 3308256 00:20:30.452 14:02:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:30.452 14:02:21 -- common/autotest_common.sh@819 -- # '[' -z 3308256 ']' 00:20:30.452 14:02:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:30.452 14:02:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:30.452 14:02:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:30.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:30.453 14:02:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:30.453 14:02:21 -- common/autotest_common.sh@10 -- # set +x 00:20:30.453 [2024-07-23 14:02:21.368563] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:20:30.453 [2024-07-23 14:02:21.368603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:30.453 EAL: No free 2048 kB hugepages reported on node 1 00:20:30.453 [2024-07-23 14:02:21.425374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:30.711 [2024-07-23 14:02:21.504306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:30.711 [2024-07-23 14:02:21.504417] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:30.711 [2024-07-23 14:02:21.504424] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:30.711 [2024-07-23 14:02:21.504431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:30.711 [2024-07-23 14:02:21.504712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.711 [2024-07-23 14:02:21.504731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:30.711 [2024-07-23 14:02:21.504818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:30.711 [2024-07-23 14:02:21.504819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.279 14:02:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:31.279 14:02:22 -- common/autotest_common.sh@852 -- # return 0 00:20:31.279 14:02:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:31.279 14:02:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:31.279 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.279 14:02:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.279 14:02:22 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:31.279 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.279 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.279 [2024-07-23 14:02:22.214319] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.279 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.279 14:02:22 -- target/multiconnection.sh@21 -- # seq 1 11 00:20:31.279 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.279 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:31.279 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.279 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.279 Malloc1 00:20:31.279 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.279 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:20:31.279 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.279 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.279 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.279 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:31.279 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.279 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.279 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.279 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:31.279 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.279 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.279 [2024-07-23 14:02:22.270282] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:31.279 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.279 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.279 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:20:31.279 14:02:22 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.279 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.279 Malloc2 00:20:31.279 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.279 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:20:31.279 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.279 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.539 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:20:31.539 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.539 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.539 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:31.539 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.539 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.539 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.539 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:20:31.539 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.539 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 Malloc3 00:20:31.539 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.539 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:20:31.539 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.539 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.539 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:20:31.539 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.539 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.539 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:20:31.539 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.539 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.539 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.539 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:20:31.539 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.539 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.539 Malloc4 00:20:31.539 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.539 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:20:31.539 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.539 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.540 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 Malloc5 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.540 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 Malloc6 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.540 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 Malloc7 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.540 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.540 Malloc8 00:20:31.540 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.540 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:20:31.540 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.540 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:20:31.799 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.799 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:20:31.799 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.799 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.799 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:20:31.799 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.799 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 Malloc9 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
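(Note: the rpc_cmd sequence running here is the multiconnection setup loop. For each of NVMF_SUBSYS=11 subsystems it creates a 64 MB malloc bdev with 512-byte blocks, a subsystem cnode$i with serial SPDK$i, attaches the bdev as a namespace, and adds a TCP listener on 10.0.0.2:4420; the initiator then connects each subsystem and waits for the matching serial to appear. Unrolled from the log into an equivalent sketch, with rpc.py standing in for the rpc_cmd wrapper and the --hostnqn/--hostid flags omitted for brevity:)

rpc.py nvmf_create_transport -t tcp -o -u 8192            # transport options as in the log
for i in $(seq 1 11); do
    rpc.py bdev_malloc_create 64 512 -b "Malloc$i"        # 64 MB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done
# Initiator side (the waitforserial pattern seen below): connect, then poll
# lsblk until a block device with the expected serial shows up.
for i in $(seq 1 11); do
    nvme connect -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
    until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do sleep 2; done
done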
00:20:31.799 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.799 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:20:31.799 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.799 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:20:31.799 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.799 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.799 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:20:31.799 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.799 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 Malloc10 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:20:31.799 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.799 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.799 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.799 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:20:31.800 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.800 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.800 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.800 14:02:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:20:31.800 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.800 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.800 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.800 14:02:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.800 14:02:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:20:31.800 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.800 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.800 Malloc11 00:20:31.800 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.800 14:02:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:20:31.800 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.800 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.800 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.800 14:02:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:20:31.800 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.800 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.800 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.800 14:02:22 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:20:31.800 14:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:31.800 14:02:22 -- common/autotest_common.sh@10 -- # set +x 00:20:31.800 14:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:31.800 14:02:22 -- target/multiconnection.sh@28 -- # seq 1 11 00:20:31.800 14:02:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:31.800 14:02:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:33.171 14:02:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:20:33.171 14:02:23 -- common/autotest_common.sh@1177 -- # local i=0 00:20:33.171 14:02:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:33.171 14:02:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:33.171 14:02:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:35.076 14:02:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:35.076 14:02:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:35.076 14:02:25 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:20:35.076 14:02:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:35.076 14:02:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:35.076 14:02:25 -- common/autotest_common.sh@1187 -- # return 0 00:20:35.076 14:02:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:35.076 14:02:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:20:36.447 14:02:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:20:36.447 14:02:27 -- common/autotest_common.sh@1177 -- # local i=0 00:20:36.447 14:02:27 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:36.447 14:02:27 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:36.447 14:02:27 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:38.361 14:02:29 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:38.361 14:02:29 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:38.361 14:02:29 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:20:38.362 14:02:29 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:38.362 14:02:29 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:38.362 14:02:29 -- common/autotest_common.sh@1187 -- # return 0 00:20:38.362 14:02:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:38.362 14:02:29 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:20:39.308 14:02:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:20:39.308 14:02:30 -- common/autotest_common.sh@1177 -- # local i=0 00:20:39.308 14:02:30 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:39.308 14:02:30 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:39.308 14:02:30 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:20:41.207 14:02:32 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:41.207 14:02:32 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:41.207 14:02:32 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:20:41.465 14:02:32 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:41.465 14:02:32 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:41.465 14:02:32 -- common/autotest_common.sh@1187 -- # return 0 00:20:41.465 14:02:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:41.465 14:02:32 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:20:42.841 14:02:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:20:42.841 14:02:33 -- common/autotest_common.sh@1177 -- # local i=0 00:20:42.841 14:02:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:42.841 14:02:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:42.841 14:02:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:44.742 14:02:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:44.742 14:02:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:44.742 14:02:35 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:20:44.742 14:02:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:44.742 14:02:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:44.742 14:02:35 -- common/autotest_common.sh@1187 -- # return 0 00:20:44.742 14:02:35 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:44.742 14:02:35 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:20:45.676 14:02:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:20:45.676 14:02:36 -- common/autotest_common.sh@1177 -- # local i=0 00:20:45.676 14:02:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:45.676 14:02:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:45.676 14:02:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:48.205 14:02:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:48.205 14:02:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:48.205 14:02:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:20:48.205 14:02:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:48.205 14:02:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:48.205 14:02:38 -- common/autotest_common.sh@1187 -- # return 0 00:20:48.205 14:02:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:48.205 14:02:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:20:49.137 14:02:39 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:20:49.137 14:02:39 -- common/autotest_common.sh@1177 -- # local i=0 00:20:49.137 14:02:39 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:20:49.137 14:02:39 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:49.137 14:02:39 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:51.034 14:02:41 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:51.034 14:02:41 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:51.034 14:02:41 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:20:51.034 14:02:41 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:51.034 14:02:41 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:51.034 14:02:41 -- common/autotest_common.sh@1187 -- # return 0 00:20:51.034 14:02:41 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:51.034 14:02:41 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:20:52.410 14:02:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:20:52.410 14:02:43 -- common/autotest_common.sh@1177 -- # local i=0 00:20:52.410 14:02:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:52.410 14:02:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:52.410 14:02:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:54.312 14:02:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:54.312 14:02:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:54.312 14:02:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:20:54.312 14:02:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:54.312 14:02:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:54.312 14:02:45 -- common/autotest_common.sh@1187 -- # return 0 00:20:54.312 14:02:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:54.312 14:02:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:20:55.687 14:02:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:20:55.687 14:02:46 -- common/autotest_common.sh@1177 -- # local i=0 00:20:55.687 14:02:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:55.687 14:02:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:55.687 14:02:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:20:57.589 14:02:48 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:20:57.590 14:02:48 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:20:57.590 14:02:48 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:20:57.590 14:02:48 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:20:57.590 14:02:48 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:20:57.590 14:02:48 -- common/autotest_common.sh@1187 -- # return 0 00:20:57.590 14:02:48 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:20:57.590 14:02:48 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:20:59.026 14:02:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:20:59.026 
14:02:49 -- common/autotest_common.sh@1177 -- # local i=0 00:20:59.026 14:02:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:20:59.026 14:02:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:20:59.026 14:02:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:00.929 14:02:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:00.929 14:02:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:00.929 14:02:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:21:01.187 14:02:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:01.187 14:02:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:01.187 14:02:51 -- common/autotest_common.sh@1187 -- # return 0 00:21:01.187 14:02:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:01.187 14:02:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:21:02.559 14:02:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:02.559 14:02:53 -- common/autotest_common.sh@1177 -- # local i=0 00:21:02.559 14:02:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:02.559 14:02:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:02.559 14:02:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:04.484 14:02:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:04.484 14:02:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:04.484 14:02:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:21:04.484 14:02:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:04.484 14:02:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:04.484 14:02:55 -- common/autotest_common.sh@1187 -- # return 0 00:21:04.484 14:02:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:04.484 14:02:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:21:05.858 14:02:56 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:21:05.858 14:02:56 -- common/autotest_common.sh@1177 -- # local i=0 00:21:05.858 14:02:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:05.858 14:02:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:05.858 14:02:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:07.759 14:02:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:07.759 14:02:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:07.759 14:02:58 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:21:07.759 14:02:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:07.760 14:02:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:07.760 14:02:58 -- common/autotest_common.sh@1187 -- # return 0 00:21:07.760 14:02:58 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:21:07.760 [global] 00:21:07.760 thread=1 00:21:07.760 invalidate=1 00:21:07.760 rw=read 00:21:07.760 time_based=1 00:21:07.760 
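
The fio-wrapper flags above map one-to-one onto the job file being printed here and continued below: -i 262144 becomes bs=262144, -d 64 becomes iodepth=64, -t read becomes rw=read, and -r 10 becomes runtime=10, with one [jobN] section per connected /dev/nvmeXn1 namespace. A rough sketch of that generation step (a hypothetical helper for illustration, not the actual fio-wrapper implementation):

#!/usr/bin/env bash
# Hypothetical generator reproducing the job file shown in the log.
generate_job_file() {
    local bs=$1 iodepth=$2 rw=$3 runtime=$4
    # [global] options match the listing printed by the test above.
    printf '[global]\nthread=1\ninvalidate=1\nrw=%s\ntime_based=1\nruntime=%s\n' "$rw" "$runtime"
    printf 'ioengine=libaio\ndirect=1\nbs=%s\niodepth=%s\nnorandommap=1\nnumjobs=1\n' "$bs" "$iodepth"
    local n=0 dev
    for dev in /dev/nvme*n1; do
        printf '\n[job%d]\nfilename=%s\n' "$n" "$dev"
        n=$((n + 1))
    done
}

generate_job_file 262144 64 read 10 > nvmf.fio

The job-file listing from the actual run continues below.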
runtime=10 00:21:07.760 ioengine=libaio 00:21:07.760 direct=1 00:21:07.760 bs=262144 00:21:07.760 iodepth=64 00:21:07.760 norandommap=1 00:21:07.760 numjobs=1 00:21:07.760 00:21:07.760 [job0] 00:21:07.760 filename=/dev/nvme0n1 00:21:07.760 [job1] 00:21:07.760 filename=/dev/nvme10n1 00:21:07.760 [job2] 00:21:07.760 filename=/dev/nvme1n1 00:21:07.760 [job3] 00:21:07.760 filename=/dev/nvme2n1 00:21:07.760 [job4] 00:21:07.760 filename=/dev/nvme3n1 00:21:07.760 [job5] 00:21:07.760 filename=/dev/nvme4n1 00:21:07.760 [job6] 00:21:07.760 filename=/dev/nvme5n1 00:21:07.760 [job7] 00:21:07.760 filename=/dev/nvme6n1 00:21:07.760 [job8] 00:21:07.760 filename=/dev/nvme7n1 00:21:07.760 [job9] 00:21:07.760 filename=/dev/nvme8n1 00:21:07.760 [job10] 00:21:07.760 filename=/dev/nvme9n1 00:21:08.018 Could not set queue depth (nvme0n1) 00:21:08.018 Could not set queue depth (nvme10n1) 00:21:08.018 Could not set queue depth (nvme1n1) 00:21:08.018 Could not set queue depth (nvme2n1) 00:21:08.018 Could not set queue depth (nvme3n1) 00:21:08.018 Could not set queue depth (nvme4n1) 00:21:08.018 Could not set queue depth (nvme5n1) 00:21:08.018 Could not set queue depth (nvme6n1) 00:21:08.018 Could not set queue depth (nvme7n1) 00:21:08.018 Could not set queue depth (nvme8n1) 00:21:08.018 Could not set queue depth (nvme9n1) 00:21:08.277 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:08.277 fio-3.35 00:21:08.277 Starting 11 threads 00:21:20.485 00:21:20.485 job0: (groupid=0, jobs=1): err= 0: pid=3314826: Tue Jul 23 14:03:09 2024 00:21:20.485 read: IOPS=754, BW=189MiB/s (198MB/s)(1912MiB/10128msec) 00:21:20.485 slat (usec): min=8, max=144713, avg=934.04, stdev=4591.25 00:21:20.485 clat (msec): min=4, max=327, avg=83.69, stdev=39.41 00:21:20.485 lat (msec): min=4, max=327, avg=84.62, stdev=39.95 00:21:20.485 clat percentiles (msec): 00:21:20.485 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 53], 00:21:20.485 | 30.00th=[ 65], 40.00th=[ 73], 50.00th=[ 81], 60.00th=[ 91], 00:21:20.485 | 70.00th=[ 100], 80.00th=[ 112], 90.00th=[ 134], 95.00th=[ 159], 00:21:20.485 | 99.00th=[ 197], 99.50th=[ 203], 99.90th=[ 262], 99.95th=[ 279], 00:21:20.485 | 99.99th=[ 330] 00:21:20.485 bw ( KiB/s): min=125440, max=281088, per=9.14%, avg=194083.35, 
stdev=44790.60, samples=20 00:21:20.485 iops : min= 490, max= 1098, avg=758.10, stdev=175.00, samples=20 00:21:20.485 lat (msec) : 10=0.69%, 20=3.87%, 50=13.71%, 100=52.37%, 250=29.22% 00:21:20.485 lat (msec) : 500=0.14% 00:21:20.485 cpu : usr=0.24%, sys=2.85%, ctx=2192, majf=0, minf=4097 00:21:20.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:20.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.485 issued rwts: total=7646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.485 job1: (groupid=0, jobs=1): err= 0: pid=3314844: Tue Jul 23 14:03:09 2024 00:21:20.485 read: IOPS=685, BW=171MiB/s (180MB/s)(1719MiB/10028msec) 00:21:20.485 slat (usec): min=8, max=133036, avg=1139.97, stdev=4942.03 00:21:20.485 clat (msec): min=5, max=311, avg=92.11, stdev=42.77 00:21:20.485 lat (msec): min=5, max=311, avg=93.25, stdev=43.52 00:21:20.485 clat percentiles (msec): 00:21:20.485 | 1.00th=[ 18], 5.00th=[ 29], 10.00th=[ 34], 20.00th=[ 48], 00:21:20.485 | 30.00th=[ 59], 40.00th=[ 79], 50.00th=[ 96], 60.00th=[ 110], 00:21:20.485 | 70.00th=[ 122], 80.00th=[ 129], 90.00th=[ 148], 95.00th=[ 159], 00:21:20.485 | 99.00th=[ 194], 99.50th=[ 205], 99.90th=[ 213], 99.95th=[ 218], 00:21:20.485 | 99.99th=[ 313] 00:21:20.485 bw ( KiB/s): min=98816, max=304128, per=8.22%, avg=174438.40, stdev=57361.34, samples=20 00:21:20.485 iops : min= 386, max= 1188, avg=681.40, stdev=224.07, samples=20 00:21:20.485 lat (msec) : 10=0.20%, 20=1.28%, 50=20.79%, 100=30.19%, 250=47.52% 00:21:20.485 lat (msec) : 500=0.01% 00:21:20.485 cpu : usr=0.23%, sys=2.46%, ctx=1724, majf=0, minf=4097 00:21:20.485 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:20.485 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.485 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.485 issued rwts: total=6877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.485 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.485 job2: (groupid=0, jobs=1): err= 0: pid=3314879: Tue Jul 23 14:03:09 2024 00:21:20.485 read: IOPS=639, BW=160MiB/s (168MB/s)(1620MiB/10128msec) 00:21:20.485 slat (usec): min=9, max=136198, avg=1383.74, stdev=4585.17 00:21:20.485 clat (msec): min=21, max=262, avg=98.44, stdev=37.74 00:21:20.485 lat (msec): min=22, max=262, avg=99.82, stdev=38.26 00:21:20.485 clat percentiles (msec): 00:21:20.485 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 45], 20.00th=[ 64], 00:21:20.485 | 30.00th=[ 78], 40.00th=[ 87], 50.00th=[ 101], 60.00th=[ 110], 00:21:20.485 | 70.00th=[ 121], 80.00th=[ 131], 90.00th=[ 144], 95.00th=[ 159], 00:21:20.485 | 99.00th=[ 190], 99.50th=[ 192], 99.90th=[ 218], 99.95th=[ 264], 00:21:20.486 | 99.99th=[ 264] 00:21:20.486 bw ( KiB/s): min=101888, max=302592, per=7.74%, avg=164233.45, stdev=55267.50, samples=20 00:21:20.486 iops : min= 398, max= 1182, avg=641.50, stdev=215.89, samples=20 00:21:20.486 lat (msec) : 50=12.36%, 100=37.85%, 250=49.71%, 500=0.08% 00:21:20.486 cpu : usr=0.24%, sys=2.38%, ctx=1448, majf=0, minf=4097 00:21:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.486 issued rwts: total=6480,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:21:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.486 job3: (groupid=0, jobs=1): err= 0: pid=3314903: Tue Jul 23 14:03:09 2024 00:21:20.486 read: IOPS=852, BW=213MiB/s (224MB/s)(2158MiB/10123msec) 00:21:20.486 slat (usec): min=8, max=100202, avg=893.11, stdev=3616.21 00:21:20.486 clat (usec): min=1967, max=281904, avg=74071.62, stdev=36572.13 00:21:20.486 lat (usec): min=1998, max=281947, avg=74964.74, stdev=36945.74 00:21:20.486 clat percentiles (msec): 00:21:20.486 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 31], 20.00th=[ 40], 00:21:20.486 | 30.00th=[ 52], 40.00th=[ 63], 50.00th=[ 74], 60.00th=[ 81], 00:21:20.486 | 70.00th=[ 90], 80.00th=[ 104], 90.00th=[ 123], 95.00th=[ 136], 00:21:20.486 | 99.00th=[ 176], 99.50th=[ 194], 99.90th=[ 243], 99.95th=[ 251], 00:21:20.486 | 99.99th=[ 284] 00:21:20.486 bw ( KiB/s): min=131584, max=360960, per=10.33%, avg=219300.85, stdev=66845.98, samples=20 00:21:20.486 iops : min= 514, max= 1410, avg=856.60, stdev=261.17, samples=20 00:21:20.486 lat (msec) : 2=0.01%, 4=0.20%, 10=1.41%, 20=2.35%, 50=24.84% 00:21:20.486 lat (msec) : 100=49.03%, 250=22.09%, 500=0.06% 00:21:20.486 cpu : usr=0.25%, sys=3.06%, ctx=2197, majf=0, minf=4097 00:21:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.486 issued rwts: total=8631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.486 job4: (groupid=0, jobs=1): err= 0: pid=3314916: Tue Jul 23 14:03:09 2024 00:21:20.486 read: IOPS=800, BW=200MiB/s (210MB/s)(2024MiB/10117msec) 00:21:20.486 slat (usec): min=7, max=104291, avg=884.57, stdev=3685.84 00:21:20.486 clat (usec): min=1537, max=283413, avg=78948.09, stdev=38548.15 00:21:20.486 lat (usec): min=1578, max=283462, avg=79832.66, stdev=39014.30 00:21:20.486 clat percentiles (msec): 00:21:20.486 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 38], 20.00th=[ 43], 00:21:20.486 | 30.00th=[ 51], 40.00th=[ 64], 50.00th=[ 77], 60.00th=[ 88], 00:21:20.486 | 70.00th=[ 101], 80.00th=[ 116], 90.00th=[ 130], 95.00th=[ 142], 00:21:20.486 | 99.00th=[ 165], 99.50th=[ 190], 99.90th=[ 266], 99.95th=[ 271], 00:21:20.486 | 99.99th=[ 284] 00:21:20.486 bw ( KiB/s): min=113664, max=382464, per=9.68%, avg=205527.80, stdev=68456.48, samples=20 00:21:20.486 iops : min= 444, max= 1494, avg=802.80, stdev=267.41, samples=20 00:21:20.486 lat (msec) : 2=0.09%, 4=0.19%, 10=1.38%, 20=2.51%, 50=25.30% 00:21:20.486 lat (msec) : 100=40.54%, 250=29.79%, 500=0.21% 00:21:20.486 cpu : usr=0.38%, sys=2.92%, ctx=2095, majf=0, minf=3347 00:21:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.486 issued rwts: total=8094,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.486 job5: (groupid=0, jobs=1): err= 0: pid=3314961: Tue Jul 23 14:03:09 2024 00:21:20.486 read: IOPS=863, BW=216MiB/s (226MB/s)(2183MiB/10113msec) 00:21:20.486 slat (usec): min=9, max=193530, avg=915.16, stdev=4289.30 00:21:20.486 clat (usec): min=1318, max=359334, avg=73100.53, stdev=43104.40 00:21:20.486 lat (usec): min=1335, max=359361, avg=74015.69, 
stdev=43450.50 00:21:20.486 clat percentiles (msec): 00:21:20.486 | 1.00th=[ 15], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 40], 00:21:20.486 | 30.00th=[ 48], 40.00th=[ 56], 50.00th=[ 65], 60.00th=[ 73], 00:21:20.486 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 124], 95.00th=[ 150], 00:21:20.486 | 99.00th=[ 239], 99.50th=[ 342], 99.90th=[ 355], 99.95th=[ 359], 00:21:20.486 | 99.99th=[ 359] 00:21:20.486 bw ( KiB/s): min=87552, max=421376, per=10.45%, avg=221834.65, stdev=89821.36, samples=20 00:21:20.486 iops : min= 342, max= 1646, avg=866.50, stdev=350.78, samples=20 00:21:20.486 lat (msec) : 2=0.03%, 4=0.05%, 10=0.45%, 20=1.28%, 50=31.01% 00:21:20.486 lat (msec) : 100=48.20%, 250=18.44%, 500=0.54% 00:21:20.486 cpu : usr=0.17%, sys=2.79%, ctx=1986, majf=0, minf=4097 00:21:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:21:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.486 issued rwts: total=8730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.486 job6: (groupid=0, jobs=1): err= 0: pid=3314985: Tue Jul 23 14:03:09 2024 00:21:20.486 read: IOPS=639, BW=160MiB/s (168MB/s)(1617MiB/10118msec) 00:21:20.486 slat (usec): min=9, max=88637, avg=1289.74, stdev=4624.83 00:21:20.486 clat (msec): min=6, max=276, avg=98.67, stdev=37.11 00:21:20.486 lat (msec): min=7, max=283, avg=99.96, stdev=37.78 00:21:20.486 clat percentiles (msec): 00:21:20.486 | 1.00th=[ 20], 5.00th=[ 34], 10.00th=[ 42], 20.00th=[ 66], 00:21:20.486 | 30.00th=[ 84], 40.00th=[ 95], 50.00th=[ 104], 60.00th=[ 111], 00:21:20.486 | 70.00th=[ 118], 80.00th=[ 128], 90.00th=[ 140], 95.00th=[ 153], 00:21:20.486 | 99.00th=[ 180], 99.50th=[ 197], 99.90th=[ 271], 99.95th=[ 275], 00:21:20.486 | 99.99th=[ 275] 00:21:20.486 bw ( KiB/s): min=97792, max=275456, per=7.72%, avg=163978.90, stdev=45672.60, samples=20 00:21:20.486 iops : min= 382, max= 1076, avg=640.50, stdev=178.43, samples=20 00:21:20.486 lat (msec) : 10=0.12%, 20=0.88%, 50=11.55%, 100=33.37%, 250=53.75% 00:21:20.486 lat (msec) : 500=0.32% 00:21:20.486 cpu : usr=0.23%, sys=2.69%, ctx=1692, majf=0, minf=4097 00:21:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:21:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.486 issued rwts: total=6469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.486 job7: (groupid=0, jobs=1): err= 0: pid=3315004: Tue Jul 23 14:03:09 2024 00:21:20.486 read: IOPS=1000, BW=250MiB/s (262MB/s)(2505MiB/10018msec) 00:21:20.486 slat (usec): min=10, max=94003, avg=797.37, stdev=2885.65 00:21:20.486 clat (msec): min=3, max=189, avg=63.11, stdev=33.69 00:21:20.486 lat (msec): min=3, max=221, avg=63.91, stdev=34.06 00:21:20.486 clat percentiles (msec): 00:21:20.486 | 1.00th=[ 11], 5.00th=[ 18], 10.00th=[ 28], 20.00th=[ 35], 00:21:20.486 | 30.00th=[ 40], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 65], 00:21:20.486 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 111], 95.00th=[ 125], 00:21:20.486 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 186], 00:21:20.486 | 99.99th=[ 190] 00:21:20.486 bw ( KiB/s): min=132608, max=460800, per=12.01%, avg=254880.50, stdev=94477.03, samples=20 00:21:20.486 iops : min= 518, max= 1800, 
avg=995.60, stdev=369.07, samples=20 00:21:20.486 lat (msec) : 4=0.03%, 10=0.81%, 20=5.93%, 50=36.82%, 100=41.61% 00:21:20.486 lat (msec) : 250=14.81% 00:21:20.486 cpu : usr=0.43%, sys=3.75%, ctx=2386, majf=0, minf=4097 00:21:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:21:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.486 issued rwts: total=10020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.486 job8: (groupid=0, jobs=1): err= 0: pid=3315059: Tue Jul 23 14:03:09 2024 00:21:20.486 read: IOPS=664, BW=166MiB/s (174MB/s)(1684MiB/10129msec) 00:21:20.486 slat (usec): min=8, max=146050, avg=1025.25, stdev=4366.23 00:21:20.486 clat (msec): min=2, max=304, avg=95.08, stdev=38.85 00:21:20.486 lat (msec): min=2, max=304, avg=96.11, stdev=39.24 00:21:20.486 clat percentiles (msec): 00:21:20.486 | 1.00th=[ 20], 5.00th=[ 37], 10.00th=[ 52], 20.00th=[ 63], 00:21:20.486 | 30.00th=[ 73], 40.00th=[ 82], 50.00th=[ 92], 60.00th=[ 105], 00:21:20.486 | 70.00th=[ 113], 80.00th=[ 126], 90.00th=[ 144], 95.00th=[ 161], 00:21:20.486 | 99.00th=[ 194], 99.50th=[ 271], 99.90th=[ 288], 99.95th=[ 288], 00:21:20.486 | 99.99th=[ 305] 00:21:20.486 bw ( KiB/s): min=106496, max=257024, per=8.04%, avg=170781.40, stdev=36370.81, samples=20 00:21:20.486 iops : min= 416, max= 1004, avg=667.10, stdev=142.05, samples=20 00:21:20.486 lat (msec) : 4=0.01%, 10=0.45%, 20=0.64%, 50=8.14%, 100=47.62% 00:21:20.486 lat (msec) : 250=42.64%, 500=0.50% 00:21:20.486 cpu : usr=0.25%, sys=2.54%, ctx=1911, majf=0, minf=4097 00:21:20.486 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:20.486 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.486 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.486 issued rwts: total=6735,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.486 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.486 job9: (groupid=0, jobs=1): err= 0: pid=3315061: Tue Jul 23 14:03:09 2024 00:21:20.486 read: IOPS=610, BW=153MiB/s (160MB/s)(1544MiB/10123msec) 00:21:20.486 slat (usec): min=9, max=149393, avg=1184.33, stdev=4887.02 00:21:20.486 clat (msec): min=7, max=288, avg=103.60, stdev=39.59 00:21:20.486 lat (msec): min=7, max=289, avg=104.79, stdev=40.15 00:21:20.486 clat percentiles (msec): 00:21:20.486 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 47], 20.00th=[ 67], 00:21:20.486 | 30.00th=[ 83], 40.00th=[ 100], 50.00th=[ 109], 60.00th=[ 117], 00:21:20.486 | 70.00th=[ 126], 80.00th=[ 133], 90.00th=[ 148], 95.00th=[ 167], 00:21:20.486 | 99.00th=[ 190], 99.50th=[ 197], 99.90th=[ 275], 99.95th=[ 275], 00:21:20.486 | 99.99th=[ 288] 00:21:20.486 bw ( KiB/s): min=79872, max=239616, per=7.37%, avg=156492.80, stdev=42402.97, samples=20 00:21:20.486 iops : min= 312, max= 936, avg=611.30, stdev=165.64, samples=20 00:21:20.487 lat (msec) : 10=0.10%, 20=0.89%, 50=10.04%, 100=29.83%, 250=58.87% 00:21:20.487 lat (msec) : 500=0.28% 00:21:20.487 cpu : usr=0.25%, sys=2.24%, ctx=1677, majf=0, minf=4097 00:21:20.487 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:21:20.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.487 issued rwts: total=6176,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:21:20.487 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.487 job10: (groupid=0, jobs=1): err= 0: pid=3315064: Tue Jul 23 14:03:09 2024 00:21:20.487 read: IOPS=811, BW=203MiB/s (213MB/s)(2034MiB/10026msec) 00:21:20.487 slat (usec): min=8, max=159530, avg=930.19, stdev=4615.37 00:21:20.487 clat (msec): min=4, max=237, avg=77.83, stdev=43.31 00:21:20.487 lat (msec): min=4, max=309, avg=78.76, stdev=43.90 00:21:20.487 clat percentiles (msec): 00:21:20.487 | 1.00th=[ 10], 5.00th=[ 22], 10.00th=[ 33], 20.00th=[ 42], 00:21:20.487 | 30.00th=[ 47], 40.00th=[ 54], 50.00th=[ 69], 60.00th=[ 85], 00:21:20.487 | 70.00th=[ 99], 80.00th=[ 114], 90.00th=[ 138], 95.00th=[ 163], 00:21:20.487 | 99.00th=[ 203], 99.50th=[ 211], 99.90th=[ 213], 99.95th=[ 213], 00:21:20.487 | 99.99th=[ 239] 00:21:20.487 bw ( KiB/s): min=99840, max=376832, per=9.74%, avg=206664.35, stdev=82214.41, samples=20 00:21:20.487 iops : min= 390, max= 1472, avg=807.20, stdev=321.23, samples=20 00:21:20.487 lat (msec) : 10=1.12%, 20=3.31%, 50=30.86%, 100=36.46%, 250=28.26% 00:21:20.487 cpu : usr=0.29%, sys=2.97%, ctx=2047, majf=0, minf=4097 00:21:20.487 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:21:20.487 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:20.487 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:20.487 issued rwts: total=8136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:20.487 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:20.487 00:21:20.487 Run status group 0 (all jobs): 00:21:20.487 READ: bw=2073MiB/s (2174MB/s), 153MiB/s-250MiB/s (160MB/s-262MB/s), io=20.5GiB (22.0GB), run=10018-10129msec 00:21:20.487 00:21:20.487 Disk stats (read/write): 00:21:20.487 nvme0n1: ios=15215/0, merge=0/0, ticks=1247151/0, in_queue=1247151, util=94.33% 00:21:20.487 nvme10n1: ios=13257/0, merge=0/0, ticks=1223404/0, in_queue=1223404, util=94.62% 00:21:20.487 nvme1n1: ios=12858/0, merge=0/0, ticks=1234729/0, in_queue=1234729, util=95.36% 00:21:20.487 nvme2n1: ios=17217/0, merge=0/0, ticks=1251831/0, in_queue=1251831, util=95.76% 00:21:20.487 nvme3n1: ios=16158/0, merge=0/0, ticks=1251789/0, in_queue=1251789, util=95.91% 00:21:20.487 nvme4n1: ios=17459/0, merge=0/0, ticks=1248765/0, in_queue=1248765, util=96.79% 00:21:20.487 nvme5n1: ios=12913/0, merge=0/0, ticks=1246692/0, in_queue=1246692, util=97.15% 00:21:20.487 nvme6n1: ios=19465/0, merge=0/0, ticks=1220987/0, in_queue=1220987, util=97.40% 00:21:20.487 nvme7n1: ios=13418/0, merge=0/0, ticks=1249661/0, in_queue=1249661, util=98.42% 00:21:20.487 nvme8n1: ios=12306/0, merge=0/0, ticks=1252079/0, in_queue=1252079, util=98.88% 00:21:20.487 nvme9n1: ios=15751/0, merge=0/0, ticks=1227770/0, in_queue=1227770, util=99.18% 00:21:20.487 14:03:10 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:21:20.487 [global] 00:21:20.487 thread=1 00:21:20.487 invalidate=1 00:21:20.487 rw=randwrite 00:21:20.487 time_based=1 00:21:20.487 runtime=10 00:21:20.487 ioengine=libaio 00:21:20.487 direct=1 00:21:20.487 bs=262144 00:21:20.487 iodepth=64 00:21:20.487 norandommap=1 00:21:20.487 numjobs=1 00:21:20.487 00:21:20.487 [job0] 00:21:20.487 filename=/dev/nvme0n1 00:21:20.487 [job1] 00:21:20.487 filename=/dev/nvme10n1 00:21:20.487 [job2] 00:21:20.487 filename=/dev/nvme1n1 00:21:20.487 [job3] 00:21:20.487 filename=/dev/nvme2n1 00:21:20.487 [job4] 00:21:20.487 
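
(Job-file listing continues below.) The write pass being laid out here mirrors the read pass: same wrapper, same job layout, with -t randwrite swapping rw=read for rw=randwrite. One detail worth noting in the filename entries: the devices enumerate as nvme0n1, nvme10n1, nvme1n1, nvme2n1, ..., which is consistent with a lexical sort of the /dev/nvme*n1 glob rather than numeric namespace order, since '0' collates before 'n':

# Lexical glob expansion reproduces the job ordering seen in both fio passes:
# 'nvme10n1' sorts between 'nvme0n1' and 'nvme1n1' because '0' < 'n'.
for dev in /dev/nvme*n1; do
    echo "$dev"
done
# On this host: /dev/nvme0n1 /dev/nvme10n1 /dev/nvme1n1 /dev/nvme2n1 ...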
filename=/dev/nvme3n1 00:21:20.487 [job5] 00:21:20.487 filename=/dev/nvme4n1 00:21:20.487 [job6] 00:21:20.487 filename=/dev/nvme5n1 00:21:20.487 [job7] 00:21:20.487 filename=/dev/nvme6n1 00:21:20.487 [job8] 00:21:20.487 filename=/dev/nvme7n1 00:21:20.487 [job9] 00:21:20.487 filename=/dev/nvme8n1 00:21:20.487 [job10] 00:21:20.487 filename=/dev/nvme9n1 00:21:20.487 Could not set queue depth (nvme0n1) 00:21:20.487 Could not set queue depth (nvme10n1) 00:21:20.487 Could not set queue depth (nvme1n1) 00:21:20.487 Could not set queue depth (nvme2n1) 00:21:20.487 Could not set queue depth (nvme3n1) 00:21:20.487 Could not set queue depth (nvme4n1) 00:21:20.487 Could not set queue depth (nvme5n1) 00:21:20.487 Could not set queue depth (nvme6n1) 00:21:20.487 Could not set queue depth (nvme7n1) 00:21:20.487 Could not set queue depth (nvme8n1) 00:21:20.487 Could not set queue depth (nvme9n1) 00:21:20.487 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:21:20.487 fio-3.35 00:21:20.487 Starting 11 threads 00:21:30.464 00:21:30.464 job0: (groupid=0, jobs=1): err= 0: pid=3317124: Tue Jul 23 14:03:21 2024 00:21:30.464 write: IOPS=306, BW=76.7MiB/s (80.4MB/s)(793MiB/10335msec); 0 zone resets 00:21:30.464 slat (usec): min=19, max=143928, avg=2440.66, stdev=9981.59 00:21:30.464 clat (msec): min=5, max=1414, avg=205.93, stdev=206.19 00:21:30.464 lat (msec): min=5, max=1414, avg=208.37, stdev=208.77 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 28], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 72], 00:21:30.464 | 30.00th=[ 89], 40.00th=[ 115], 50.00th=[ 138], 60.00th=[ 169], 00:21:30.464 | 70.00th=[ 241], 80.00th=[ 292], 90.00th=[ 351], 95.00th=[ 768], 00:21:30.464 | 99.00th=[ 995], 99.50th=[ 1234], 99.90th=[ 1385], 99.95th=[ 1418], 00:21:30.464 | 99.99th=[ 1418] 00:21:30.464 bw ( KiB/s): min=12288, max=240609, per=6.17%, avg=79563.25, stdev=60547.48, samples=20 00:21:30.464 iops : min= 48, max= 939, avg=310.75, stdev=236.39, samples=20 00:21:30.464 lat (msec) : 10=0.06%, 20=0.38%, 50=5.71%, 100=28.29%, 250=36.87% 00:21:30.464 lat (msec) : 500=21.67%, 750=1.67%, 1000=4.38%, 2000=0.98% 00:21:30.464 cpu : usr=0.80%, sys=1.00%, ctx=1706, majf=0, minf=1 00:21:30.464 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,3171,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job1: (groupid=0, jobs=1): err= 0: pid=3317126: Tue Jul 23 14:03:21 2024 00:21:30.464 write: IOPS=403, BW=101MiB/s (106MB/s)(1037MiB/10272msec); 0 zone resets 00:21:30.464 slat (usec): min=19, max=96143, avg=2010.57, stdev=5297.55 00:21:30.464 clat (msec): min=8, max=558, avg=156.39, stdev=91.67 00:21:30.464 lat (msec): min=8, max=558, avg=158.40, stdev=92.83 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 18], 5.00th=[ 39], 10.00th=[ 57], 20.00th=[ 90], 00:21:30.464 | 30.00th=[ 106], 40.00th=[ 117], 50.00th=[ 134], 60.00th=[ 159], 00:21:30.464 | 70.00th=[ 188], 80.00th=[ 224], 90.00th=[ 275], 95.00th=[ 317], 00:21:30.464 | 99.00th=[ 485], 99.50th=[ 518], 99.90th=[ 550], 99.95th=[ 558], 00:21:30.464 | 99.99th=[ 558] 00:21:30.464 bw ( KiB/s): min=30720, max=175616, per=8.10%, avg=104550.40, stdev=41201.01, samples=20 00:21:30.464 iops : min= 120, max= 686, avg=408.40, stdev=160.94, samples=20 00:21:30.464 lat (msec) : 10=0.07%, 20=1.30%, 50=7.04%, 100=18.13%, 250=58.24% 00:21:30.464 lat (msec) : 500=14.44%, 750=0.77% 00:21:30.464 cpu : usr=1.11%, sys=1.15%, ctx=1961, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,4148,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job2: (groupid=0, jobs=1): err= 0: pid=3317128: Tue Jul 23 14:03:21 2024 00:21:30.464 write: IOPS=413, BW=103MiB/s (108MB/s)(1063MiB/10270msec); 0 zone resets 00:21:30.464 slat (usec): min=25, max=87683, avg=2274.85, stdev=5307.98 00:21:30.464 clat (msec): min=20, max=618, avg=152.26, stdev=94.18 00:21:30.464 lat (msec): min=20, max=618, avg=154.53, stdev=95.40 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 58], 5.00th=[ 62], 10.00th=[ 64], 20.00th=[ 77], 00:21:30.464 | 30.00th=[ 90], 40.00th=[ 115], 50.00th=[ 128], 60.00th=[ 142], 00:21:30.464 | 70.00th=[ 161], 80.00th=[ 215], 90.00th=[ 284], 95.00th=[ 326], 00:21:30.464 | 99.00th=[ 514], 99.50th=[ 542], 99.90th=[ 592], 99.95th=[ 592], 00:21:30.464 | 99.99th=[ 617] 00:21:30.464 bw ( KiB/s): min=30720, max=248832, per=8.31%, avg=107161.60, stdev=52385.99, samples=20 00:21:30.464 iops : min= 120, max= 972, avg=418.60, stdev=204.63, samples=20 00:21:30.464 lat (msec) : 50=0.78%, 100=32.99%, 250=50.52%, 500=14.45%, 750=1.27% 00:21:30.464 cpu : usr=1.49%, sys=1.29%, ctx=1239, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,4250,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job3: (groupid=0, jobs=1): err= 0: pid=3317130: Tue Jul 23 14:03:21 2024 00:21:30.464 write: IOPS=463, BW=116MiB/s (122MB/s)(1165MiB/10047msec); 0 zone 
resets 00:21:30.464 slat (usec): min=21, max=65473, avg=1763.27, stdev=4406.61 00:21:30.464 clat (msec): min=6, max=394, avg=136.15, stdev=70.46 00:21:30.464 lat (msec): min=6, max=394, avg=137.92, stdev=71.32 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 15], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 55], 00:21:30.464 | 30.00th=[ 93], 40.00th=[ 117], 50.00th=[ 142], 60.00th=[ 159], 00:21:30.464 | 70.00th=[ 180], 80.00th=[ 194], 90.00th=[ 215], 95.00th=[ 232], 00:21:30.464 | 99.00th=[ 338], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 393], 00:21:30.464 | 99.99th=[ 397] 00:21:30.464 bw ( KiB/s): min=53248, max=301056, per=9.12%, avg=117683.20, stdev=57150.05, samples=20 00:21:30.464 iops : min= 208, max= 1176, avg=459.70, stdev=223.24, samples=20 00:21:30.464 lat (msec) : 10=0.17%, 20=2.58%, 50=10.06%, 100=21.93%, 250=60.73% 00:21:30.464 lat (msec) : 500=4.53% 00:21:30.464 cpu : usr=1.52%, sys=1.60%, ctx=1985, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,4660,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.464 job4: (groupid=0, jobs=1): err= 0: pid=3317141: Tue Jul 23 14:03:21 2024 00:21:30.464 write: IOPS=468, BW=117MiB/s (123MB/s)(1186MiB/10136msec); 0 zone resets 00:21:30.464 slat (usec): min=19, max=94476, avg=1656.03, stdev=5041.19 00:21:30.464 clat (msec): min=5, max=387, avg=135.00, stdev=77.03 00:21:30.464 lat (msec): min=7, max=387, avg=136.65, stdev=78.07 00:21:30.464 clat percentiles (msec): 00:21:30.464 | 1.00th=[ 18], 5.00th=[ 31], 10.00th=[ 48], 20.00th=[ 65], 00:21:30.464 | 30.00th=[ 77], 40.00th=[ 104], 50.00th=[ 130], 60.00th=[ 148], 00:21:30.464 | 70.00th=[ 165], 80.00th=[ 201], 90.00th=[ 249], 95.00th=[ 279], 00:21:30.464 | 99.00th=[ 342], 99.50th=[ 359], 99.90th=[ 380], 99.95th=[ 384], 00:21:30.464 | 99.99th=[ 388] 00:21:30.464 bw ( KiB/s): min=48640, max=283648, per=9.29%, avg=119833.60, stdev=58120.81, samples=20 00:21:30.464 iops : min= 190, max= 1108, avg=468.10, stdev=227.03, samples=20 00:21:30.464 lat (msec) : 10=0.17%, 20=1.60%, 50=9.17%, 100=27.66%, 250=51.71% 00:21:30.464 lat (msec) : 500=9.70% 00:21:30.464 cpu : usr=0.92%, sys=1.47%, ctx=2391, majf=0, minf=1 00:21:30.464 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:30.464 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.464 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.464 issued rwts: total=0,4744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.464 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job5: (groupid=0, jobs=1): err= 0: pid=3317143: Tue Jul 23 14:03:21 2024 00:21:30.465 write: IOPS=402, BW=101MiB/s (105MB/s)(1030MiB/10247msec); 0 zone resets 00:21:30.465 slat (usec): min=26, max=129626, avg=2058.26, stdev=5382.80 00:21:30.465 clat (msec): min=5, max=584, avg=156.56, stdev=68.31 00:21:30.465 lat (msec): min=5, max=584, avg=158.62, stdev=69.17 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 23], 5.00th=[ 49], 10.00th=[ 74], 20.00th=[ 116], 00:21:30.465 | 30.00th=[ 130], 40.00th=[ 140], 50.00th=[ 150], 60.00th=[ 163], 00:21:30.465 | 70.00th=[ 180], 80.00th=[ 199], 90.00th=[ 230], 95.00th=[ 259], 00:21:30.465 | 99.00th=[ 418], 99.50th=[ 
443], 99.90th=[ 550], 99.95th=[ 558], 00:21:30.465 | 99.99th=[ 584] 00:21:30.465 bw ( KiB/s): min=43520, max=180736, per=8.05%, avg=103869.45, stdev=31237.61, samples=20 00:21:30.465 iops : min= 170, max= 706, avg=405.70, stdev=122.02, samples=20 00:21:30.465 lat (msec) : 10=0.15%, 20=0.70%, 50=4.37%, 100=10.29%, 250=78.52% 00:21:30.465 lat (msec) : 500=5.73%, 750=0.24% 00:21:30.465 cpu : usr=1.19%, sys=1.41%, ctx=1815, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,4120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job6: (groupid=0, jobs=1): err= 0: pid=3317144: Tue Jul 23 14:03:21 2024 00:21:30.465 write: IOPS=669, BW=167MiB/s (176MB/s)(1697MiB/10131msec); 0 zone resets 00:21:30.465 slat (usec): min=17, max=91786, avg=1020.05, stdev=3227.74 00:21:30.465 clat (msec): min=4, max=293, avg=94.46, stdev=46.51 00:21:30.465 lat (msec): min=4, max=301, avg=95.48, stdev=46.88 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 17], 5.00th=[ 34], 10.00th=[ 48], 20.00th=[ 60], 00:21:30.465 | 30.00th=[ 65], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 95], 00:21:30.465 | 70.00th=[ 112], 80.00th=[ 134], 90.00th=[ 165], 95.00th=[ 188], 00:21:30.465 | 99.00th=[ 220], 99.50th=[ 253], 99.90th=[ 284], 99.95th=[ 292], 00:21:30.465 | 99.99th=[ 292] 00:21:30.465 bw ( KiB/s): min=88064, max=284672, per=13.34%, avg=172134.40, stdev=60348.73, samples=20 00:21:30.465 iops : min= 344, max= 1112, avg=672.40, stdev=235.74, samples=20 00:21:30.465 lat (msec) : 10=0.29%, 20=1.11%, 50=9.50%, 100=53.81%, 250=34.77% 00:21:30.465 lat (msec) : 500=0.52% 00:21:30.465 cpu : usr=1.38%, sys=2.00%, ctx=3339, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,6787,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job7: (groupid=0, jobs=1): err= 0: pid=3317146: Tue Jul 23 14:03:21 2024 00:21:30.465 write: IOPS=539, BW=135MiB/s (141MB/s)(1370MiB/10158msec); 0 zone resets 00:21:30.465 slat (usec): min=21, max=90088, avg=1217.20, stdev=4122.82 00:21:30.465 clat (msec): min=3, max=418, avg=117.39, stdev=70.12 00:21:30.465 lat (msec): min=3, max=418, avg=118.61, stdev=70.99 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 19], 5.00th=[ 32], 10.00th=[ 46], 20.00th=[ 57], 00:21:30.465 | 30.00th=[ 70], 40.00th=[ 87], 50.00th=[ 111], 60.00th=[ 125], 00:21:30.465 | 70.00th=[ 136], 80.00th=[ 157], 90.00th=[ 224], 95.00th=[ 266], 00:21:30.465 | 99.00th=[ 347], 99.50th=[ 363], 99.90th=[ 409], 99.95th=[ 418], 00:21:30.465 | 99.99th=[ 418] 00:21:30.465 bw ( KiB/s): min=51200, max=251392, per=10.75%, avg=138624.00, stdev=51368.65, samples=20 00:21:30.465 iops : min= 200, max= 982, avg=541.50, stdev=200.66, samples=20 00:21:30.465 lat (msec) : 4=0.02%, 10=0.26%, 20=1.20%, 50=10.29%, 100=33.97% 00:21:30.465 lat (msec) : 250=47.51%, 500=6.75% 00:21:30.465 cpu : usr=1.10%, sys=1.67%, ctx=3215, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, 
>=64=98.9% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,5479,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job8: (groupid=0, jobs=1): err= 0: pid=3317148: Tue Jul 23 14:03:21 2024 00:21:30.465 write: IOPS=543, BW=136MiB/s (143MB/s)(1382MiB/10170msec); 0 zone resets 00:21:30.465 slat (usec): min=18, max=127016, avg=1165.16, stdev=5250.01 00:21:30.465 clat (usec): min=1793, max=1396.2k, avg=116513.60, stdev=130967.61 00:21:30.465 lat (usec): min=1836, max=1396.3k, avg=117678.76, stdev=132298.08 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 12], 5.00th=[ 23], 10.00th=[ 35], 20.00th=[ 56], 00:21:30.465 | 30.00th=[ 70], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 103], 00:21:30.465 | 70.00th=[ 123], 80.00th=[ 153], 90.00th=[ 199], 95.00th=[ 236], 00:21:30.465 | 99.00th=[ 793], 99.50th=[ 1133], 99.90th=[ 1334], 99.95th=[ 1351], 00:21:30.465 | 99.99th=[ 1401] 00:21:30.465 bw ( KiB/s): min=13824, max=273920, per=10.85%, avg=139904.00, stdev=74287.63, samples=20 00:21:30.465 iops : min= 54, max= 1070, avg=546.50, stdev=290.19, samples=20 00:21:30.465 lat (msec) : 2=0.04%, 4=0.11%, 10=0.63%, 20=3.17%, 50=11.16% 00:21:30.465 lat (msec) : 100=44.20%, 250=36.70%, 500=2.26%, 750=0.67%, 1000=0.25% 00:21:30.465 lat (msec) : 2000=0.81% 00:21:30.465 cpu : usr=1.15%, sys=1.56%, ctx=3223, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,5529,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job9: (groupid=0, jobs=1): err= 0: pid=3317149: Tue Jul 23 14:03:21 2024 00:21:30.465 write: IOPS=455, BW=114MiB/s (119MB/s)(1157MiB/10159msec); 0 zone resets 00:21:30.465 slat (usec): min=21, max=101448, avg=1783.04, stdev=4979.74 00:21:30.465 clat (msec): min=3, max=329, avg=138.70, stdev=67.93 00:21:30.465 lat (msec): min=3, max=329, avg=140.49, stdev=68.90 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 17], 5.00th=[ 26], 10.00th=[ 47], 20.00th=[ 65], 00:21:30.465 | 30.00th=[ 101], 40.00th=[ 125], 50.00th=[ 142], 60.00th=[ 161], 00:21:30.465 | 70.00th=[ 186], 80.00th=[ 203], 90.00th=[ 220], 95.00th=[ 243], 00:21:30.465 | 99.00th=[ 292], 99.50th=[ 309], 99.90th=[ 326], 99.95th=[ 330], 00:21:30.465 | 99.99th=[ 330] 00:21:30.465 bw ( KiB/s): min=65536, max=285184, per=9.06%, avg=116812.80, stdev=50364.90, samples=20 00:21:30.465 iops : min= 256, max= 1114, avg=456.30, stdev=196.74, samples=20 00:21:30.465 lat (msec) : 4=0.02%, 10=0.13%, 20=3.03%, 50=10.96%, 100=15.85% 00:21:30.465 lat (msec) : 250=66.21%, 500=3.80% 00:21:30.465 cpu : usr=1.05%, sys=1.33%, ctx=2271, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,4626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 job10: (groupid=0, jobs=1): err= 0: pid=3317150: Tue Jul 23 
14:03:21 2024 00:21:30.465 write: IOPS=444, BW=111MiB/s (116MB/s)(1141MiB/10272msec); 0 zone resets 00:21:30.465 slat (usec): min=21, max=75357, avg=1695.22, stdev=4781.73 00:21:30.465 clat (msec): min=4, max=611, avg=141.92, stdev=74.42 00:21:30.465 lat (msec): min=4, max=611, avg=143.61, stdev=75.38 00:21:30.465 clat percentiles (msec): 00:21:30.465 | 1.00th=[ 18], 5.00th=[ 41], 10.00th=[ 53], 20.00th=[ 81], 00:21:30.465 | 30.00th=[ 94], 40.00th=[ 116], 50.00th=[ 138], 60.00th=[ 157], 00:21:30.465 | 70.00th=[ 186], 80.00th=[ 201], 90.00th=[ 220], 95.00th=[ 245], 00:21:30.465 | 99.00th=[ 363], 99.50th=[ 485], 99.90th=[ 584], 99.95th=[ 584], 00:21:30.465 | 99.99th=[ 609] 00:21:30.465 bw ( KiB/s): min=67584, max=180736, per=8.93%, avg=115174.40, stdev=37532.55, samples=20 00:21:30.465 iops : min= 264, max= 706, avg=449.90, stdev=146.61, samples=20 00:21:30.465 lat (msec) : 10=0.20%, 20=1.07%, 50=7.69%, 100=25.27%, 250=61.08% 00:21:30.465 lat (msec) : 500=4.21%, 750=0.48% 00:21:30.465 cpu : usr=1.08%, sys=1.49%, ctx=2327, majf=0, minf=1 00:21:30.465 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:30.465 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:30.465 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:21:30.465 issued rwts: total=0,4563,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:30.465 latency : target=0, window=0, percentile=100.00%, depth=64 00:21:30.465 00:21:30.465 Run status group 0 (all jobs): 00:21:30.465 WRITE: bw=1260MiB/s (1321MB/s), 76.7MiB/s-167MiB/s (80.4MB/s-176MB/s), io=12.7GiB (13.7GB), run=10047-10335msec 00:21:30.465 00:21:30.465 Disk stats (read/write): 00:21:30.465 nvme0n1: ios=47/6237, merge=0/0, ticks=1808/1172857, in_queue=1174665, util=99.91% 00:21:30.465 nvme10n1: ios=50/8245, merge=0/0, ticks=113/1234724, in_queue=1234837, util=97.86% 00:21:30.465 nvme1n1: ios=48/8452, merge=0/0, ticks=1345/1224510, in_queue=1225855, util=99.98% 00:21:30.465 nvme2n1: ios=50/8995, merge=0/0, ticks=2386/1215698, in_queue=1218084, util=100.00% 00:21:30.465 nvme3n1: ios=51/9327, merge=0/0, ticks=2745/1204840, in_queue=1207585, util=100.00% 00:21:30.465 nvme4n1: ios=49/8200, merge=0/0, ticks=1579/1226182, in_queue=1227761, util=100.00% 00:21:30.465 nvme5n1: ios=47/13392, merge=0/0, ticks=1974/1210437, in_queue=1212411, util=100.00% 00:21:30.465 nvme6n1: ios=15/10772, merge=0/0, ticks=32/1222791, in_queue=1222823, util=98.43% 00:21:30.465 nvme7n1: ios=0/11055, merge=0/0, ticks=0/1254209, in_queue=1254209, util=98.80% 00:21:30.465 nvme8n1: ios=0/9092, merge=0/0, ticks=0/1213211, in_queue=1213211, util=98.93% 00:21:30.465 nvme9n1: ios=41/9077, merge=0/0, ticks=990/1221982, in_queue=1222972, util=100.00% 00:21:30.465 14:03:21 -- target/multiconnection.sh@36 -- # sync 00:21:30.465 14:03:21 -- target/multiconnection.sh@37 -- # seq 1 11 00:21:30.465 14:03:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.465 14:03:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:30.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:30.774 14:03:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:21:30.774 14:03:21 -- common/autotest_common.sh@1198 -- # local i=0 00:21:30.774 14:03:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:30.774 14:03:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:21:30.774 14:03:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 
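
(Disconnect trace continues below.) The waitforserial_disconnect helper now being traced is the inverse of waitforserial: it polls lsblk until no block device reports the given SPDK serial any more. A sketch reconstructed from the trace (the poll interval and iteration limit are assumptions; only the lsblk/grep check and the return 0 on success are visible in the records above):

waitforserial_disconnect() {
    local serial=$1 i=0
    # Poll until the SERIAL column no longer lists this device.
    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( ++i > 15 )) && return 1   # assumed bound; the trace shows a counter but not its limit
        sleep 2                      # assumed interval; not visible in the trace
    done
    return 0
}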
00:21:30.774 14:03:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:21:30.774 14:03:21 -- common/autotest_common.sh@1210 -- # return 0 00:21:30.774 14:03:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:30.774 14:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:30.774 14:03:21 -- common/autotest_common.sh@10 -- # set +x 00:21:30.774 14:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:30.774 14:03:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.774 14:03:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:21:31.043 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:21:31.043 14:03:21 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:21:31.043 14:03:21 -- common/autotest_common.sh@1198 -- # local i=0 00:21:31.043 14:03:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:31.043 14:03:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:21:31.043 14:03:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:31.043 14:03:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:21:31.044 14:03:21 -- common/autotest_common.sh@1210 -- # return 0 00:21:31.044 14:03:21 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:31.044 14:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:31.044 14:03:21 -- common/autotest_common.sh@10 -- # set +x 00:21:31.044 14:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:31.044 14:03:21 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.044 14:03:21 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:21:31.301 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:21:31.301 14:03:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:21:31.301 14:03:22 -- common/autotest_common.sh@1198 -- # local i=0 00:21:31.301 14:03:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:31.301 14:03:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:21:31.301 14:03:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:31.301 14:03:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:21:31.301 14:03:22 -- common/autotest_common.sh@1210 -- # return 0 00:21:31.301 14:03:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:31.301 14:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:31.301 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:21:31.301 14:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:31.301 14:03:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.301 14:03:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:21:31.559 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:21:31.559 14:03:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:21:31.559 14:03:22 -- common/autotest_common.sh@1198 -- # local i=0 00:21:31.559 14:03:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:31.559 14:03:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:21:31.559 14:03:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:31.559 14:03:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:21:31.559 
14:03:22 -- common/autotest_common.sh@1210 -- # return 0 00:21:31.559 14:03:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:21:31.559 14:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:31.559 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:21:31.559 14:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:31.559 14:03:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.559 14:03:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:21:31.816 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:21:31.816 14:03:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:21:31.816 14:03:22 -- common/autotest_common.sh@1198 -- # local i=0 00:21:31.816 14:03:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:31.816 14:03:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:21:31.816 14:03:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:31.816 14:03:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:21:31.816 14:03:22 -- common/autotest_common.sh@1210 -- # return 0 00:21:31.816 14:03:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:21:31.816 14:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:31.816 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:21:31.816 14:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:31.817 14:03:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.817 14:03:22 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:21:32.074 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:21:32.074 14:03:22 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:21:32.074 14:03:22 -- common/autotest_common.sh@1198 -- # local i=0 00:21:32.074 14:03:22 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:32.074 14:03:22 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:21:32.074 14:03:22 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:32.074 14:03:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:21:32.074 14:03:22 -- common/autotest_common.sh@1210 -- # return 0 00:21:32.074 14:03:22 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:21:32.074 14:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.074 14:03:22 -- common/autotest_common.sh@10 -- # set +x 00:21:32.074 14:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.074 14:03:22 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:32.074 14:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:21:32.332 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:21:32.332 14:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:21:32.332 14:03:23 -- common/autotest_common.sh@1198 -- # local i=0 00:21:32.332 14:03:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:32.332 14:03:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:21:32.332 14:03:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:32.332 14:03:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:21:32.332 14:03:23 -- common/autotest_common.sh@1210 -- # return 0 00:21:32.332 14:03:23 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:21:32.332 14:03:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.332 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:21:32.332 14:03:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.332 14:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:32.332 14:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:21:32.332 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:21:32.332 14:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:21:32.332 14:03:23 -- common/autotest_common.sh@1198 -- # local i=0 00:21:32.332 14:03:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:32.332 14:03:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:21:32.332 14:03:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:32.332 14:03:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:21:32.590 14:03:23 -- common/autotest_common.sh@1210 -- # return 0 00:21:32.590 14:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:21:32.590 14:03:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.590 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:21:32.590 14:03:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.590 14:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:32.590 14:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:21:32.590 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:21:32.590 14:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:21:32.590 14:03:23 -- common/autotest_common.sh@1198 -- # local i=0 00:21:32.590 14:03:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:32.590 14:03:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:21:32.590 14:03:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:32.590 14:03:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:21:32.590 14:03:23 -- common/autotest_common.sh@1210 -- # return 0 00:21:32.590 14:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:21:32.590 14:03:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.590 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:21:32.590 14:03:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.590 14:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:32.590 14:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:21:32.849 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:21:32.849 14:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:21:32.849 14:03:23 -- common/autotest_common.sh@1198 -- # local i=0 00:21:32.849 14:03:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:32.849 14:03:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:21:32.849 14:03:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:32.849 14:03:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:21:32.849 14:03:23 -- common/autotest_common.sh@1210 -- # return 0 00:21:32.849 14:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 
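
Each teardown iteration traced here pairs a host-side disconnect with a target-side delete: nvme disconnect drops the connection, waitforserial_disconnect confirms the namespace is gone, and rpc_cmd (the test suite's wrapper for issuing RPCs to the running target, backed by scripts/rpc.py in the SPDK tree) removes the subsystem. The loop reduces to:

# Teardown loop as traced above (sketch; rpc_cmd is the autotest RPC wrapper).
for i in $(seq 1 $NVMF_SUBSYS); do
    nvme disconnect -n nqn.2016-06.io.spdk:cnode$i               # drop the host connection
    waitforserial_disconnect SPDK$i                              # wait for the namespace to vanish
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i    # remove it on the target
done

With NVMF_SUBSYS=11 in this run, that executes once per cnode1 through cnode11, after which the trace below proceeds to module unload and target shutdown.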
00:21:32.849 14:03:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.849 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:21:32.849 14:03:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.849 14:03:23 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:32.849 14:03:23 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:21:32.849 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:21:32.849 14:03:23 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:21:32.849 14:03:23 -- common/autotest_common.sh@1198 -- # local i=0 00:21:32.849 14:03:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:21:32.849 14:03:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:21:32.849 14:03:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:21:32.849 14:03:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:21:32.849 14:03:23 -- common/autotest_common.sh@1210 -- # return 0 00:21:32.849 14:03:23 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:21:32.849 14:03:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:32.849 14:03:23 -- common/autotest_common.sh@10 -- # set +x 00:21:32.849 14:03:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:32.849 14:03:23 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:21:32.849 14:03:23 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:32.849 14:03:23 -- target/multiconnection.sh@47 -- # nvmftestfini 00:21:32.849 14:03:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:32.849 14:03:23 -- nvmf/common.sh@116 -- # sync 00:21:32.849 14:03:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:32.849 14:03:23 -- nvmf/common.sh@119 -- # set +e 00:21:32.849 14:03:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:32.849 14:03:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:32.849 rmmod nvme_tcp 00:21:32.849 rmmod nvme_fabrics 00:21:32.849 rmmod nvme_keyring 00:21:32.849 14:03:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:32.849 14:03:23 -- nvmf/common.sh@123 -- # set -e 00:21:32.849 14:03:23 -- nvmf/common.sh@124 -- # return 0 00:21:32.849 14:03:23 -- nvmf/common.sh@477 -- # '[' -n 3308256 ']' 00:21:32.849 14:03:23 -- nvmf/common.sh@478 -- # killprocess 3308256 00:21:32.849 14:03:23 -- common/autotest_common.sh@926 -- # '[' -z 3308256 ']' 00:21:32.849 14:03:23 -- common/autotest_common.sh@930 -- # kill -0 3308256 00:21:32.849 14:03:23 -- common/autotest_common.sh@931 -- # uname 00:21:32.849 14:03:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:32.849 14:03:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3308256 00:21:33.106 14:03:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:33.106 14:03:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:33.106 14:03:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3308256' 00:21:33.106 killing process with pid 3308256 00:21:33.106 14:03:23 -- common/autotest_common.sh@945 -- # kill 3308256 00:21:33.106 14:03:23 -- common/autotest_common.sh@950 -- # wait 3308256 00:21:33.364 14:03:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:33.364 14:03:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:33.364 14:03:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:33.364 14:03:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.364 14:03:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:33.364 14:03:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.364 14:03:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.364 14:03:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.895 14:03:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:35.895 00:21:35.895 real 1m10.740s 00:21:35.895 user 4m13.487s 00:21:35.895 sys 0m22.144s 00:21:35.896 14:03:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:35.896 14:03:26 -- common/autotest_common.sh@10 -- # set +x 00:21:35.896 ************************************ 00:21:35.896 END TEST nvmf_multiconnection 00:21:35.896 ************************************ 00:21:35.896 14:03:26 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:35.896 14:03:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:35.896 14:03:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:35.896 14:03:26 -- common/autotest_common.sh@10 -- # set +x 00:21:35.896 ************************************ 00:21:35.896 START TEST nvmf_initiator_timeout 00:21:35.896 ************************************ 00:21:35.896 14:03:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:21:35.896 * Looking for test storage... 00:21:35.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:35.896 14:03:26 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:35.896 14:03:26 -- nvmf/common.sh@7 -- # uname -s 00:21:35.896 14:03:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:35.896 14:03:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:35.896 14:03:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:35.896 14:03:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:35.896 14:03:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:35.896 14:03:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:35.896 14:03:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:35.896 14:03:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:35.896 14:03:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:35.896 14:03:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:35.896 14:03:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.896 14:03:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:35.896 14:03:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:35.896 14:03:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:35.896 14:03:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:35.896 14:03:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:35.896 14:03:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:35.896 14:03:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:35.896 14:03:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:35.896 14:03:26 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.896 14:03:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.896 14:03:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.896 14:03:26 -- paths/export.sh@5 -- # export PATH 00:21:35.896 14:03:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:35.896 14:03:26 -- nvmf/common.sh@46 -- # : 0 00:21:35.896 14:03:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:35.896 14:03:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:35.896 14:03:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:35.896 14:03:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:35.896 14:03:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:35.896 14:03:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:35.896 14:03:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:35.896 14:03:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:35.896 14:03:26 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:35.896 14:03:26 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:35.896 14:03:26 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:21:35.896 14:03:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:35.896 14:03:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:35.896 14:03:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:35.896 14:03:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:35.896 14:03:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:35.896 14:03:26 -- nvmf/common.sh@616 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:35.896 14:03:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:35.896 14:03:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:35.896 14:03:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:35.896 14:03:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:35.896 14:03:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:35.896 14:03:26 -- common/autotest_common.sh@10 -- # set +x 00:21:41.166 14:03:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:41.166 14:03:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:41.166 14:03:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:41.166 14:03:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:41.166 14:03:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:41.166 14:03:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:41.166 14:03:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:41.166 14:03:31 -- nvmf/common.sh@294 -- # net_devs=() 00:21:41.166 14:03:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:41.166 14:03:31 -- nvmf/common.sh@295 -- # e810=() 00:21:41.166 14:03:31 -- nvmf/common.sh@295 -- # local -ga e810 00:21:41.166 14:03:31 -- nvmf/common.sh@296 -- # x722=() 00:21:41.166 14:03:31 -- nvmf/common.sh@296 -- # local -ga x722 00:21:41.166 14:03:31 -- nvmf/common.sh@297 -- # mlx=() 00:21:41.166 14:03:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:41.166 14:03:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.166 14:03:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:41.166 14:03:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:41.166 14:03:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:41.166 14:03:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:41.166 14:03:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:41.166 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:41.166 14:03:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@339 -- # for pci in 
"${pci_devs[@]}" 00:21:41.166 14:03:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:41.166 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:41.166 14:03:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:41.166 14:03:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:41.166 14:03:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.166 14:03:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:41.166 14:03:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.166 14:03:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:41.166 Found net devices under 0000:86:00.0: cvl_0_0 00:21:41.166 14:03:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.166 14:03:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:41.166 14:03:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.166 14:03:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:41.166 14:03:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.166 14:03:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:41.166 Found net devices under 0000:86:00.1: cvl_0_1 00:21:41.166 14:03:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.166 14:03:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:41.166 14:03:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:41.166 14:03:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:41.166 14:03:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:41.166 14:03:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:41.166 14:03:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.166 14:03:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.166 14:03:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:41.166 14:03:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:41.166 14:03:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:41.166 14:03:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:41.166 14:03:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:41.166 14:03:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.166 14:03:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:41.166 14:03:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:41.166 14:03:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:41.166 14:03:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:41.166 14:03:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:41.166 14:03:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:41.166 14:03:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:41.166 14:03:31 -- nvmf/common.sh@259 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:41.166 14:03:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:41.166 14:03:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:41.166 14:03:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:41.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:41.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:21:41.166 00:21:41.166 --- 10.0.0.2 ping statistics --- 00:21:41.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.166 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:21:41.167 14:03:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:41.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:21:41.167 00:21:41.167 --- 10.0.0.1 ping statistics --- 00:21:41.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.167 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:21:41.167 14:03:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.167 14:03:31 -- nvmf/common.sh@410 -- # return 0 00:21:41.167 14:03:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:41.167 14:03:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.167 14:03:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:41.167 14:03:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:41.167 14:03:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.167 14:03:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:41.167 14:03:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:41.167 14:03:31 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:21:41.167 14:03:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:41.167 14:03:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:41.167 14:03:31 -- common/autotest_common.sh@10 -- # set +x 00:21:41.167 14:03:31 -- nvmf/common.sh@469 -- # nvmfpid=3322400 00:21:41.167 14:03:31 -- nvmf/common.sh@470 -- # waitforlisten 3322400 00:21:41.167 14:03:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:41.167 14:03:31 -- common/autotest_common.sh@819 -- # '[' -z 3322400 ']' 00:21:41.167 14:03:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.167 14:03:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:41.167 14:03:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.167 14:03:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:41.167 14:03:31 -- common/autotest_common.sh@10 -- # set +x 00:21:41.167 [2024-07-23 14:03:31.873007] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
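Everything from nvmf_tcp_init up to the two pings above builds the test topology: the two E810 ports are assumed cabled back-to-back, cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side, and cvl_0_1 stays in the root namespace as the initiator. A reconstruction with plain iproute2/iptables, interface names and addresses copied from this run:

# Target port lives in a namespace; initiator port stays in the root ns.
NS=cvl_0_0_ns_spdk; TGT_IF=cvl_0_0; INI_IF=cvl_0_1

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Open TCP/4420 on the initiator-facing port, then verify both directions.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1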
00:21:41.167 [2024-07-23 14:03:31.873053] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.167 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.167 [2024-07-23 14:03:31.930201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.167 [2024-07-23 14:03:32.008124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:41.167 [2024-07-23 14:03:32.008227] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.167 [2024-07-23 14:03:32.008234] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.167 [2024-07-23 14:03:32.008242] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:41.167 [2024-07-23 14:03:32.008282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.167 [2024-07-23 14:03:32.008381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.167 [2024-07-23 14:03:32.008466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.167 [2024-07-23 14:03:32.008467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:41.733 14:03:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:41.733 14:03:32 -- common/autotest_common.sh@852 -- # return 0 00:21:41.733 14:03:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:41.733 14:03:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:41.733 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.733 14:03:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.733 14:03:32 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:41.733 14:03:32 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:41.733 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.733 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.733 Malloc0 00:21:41.733 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.733 14:03:32 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:21:41.733 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.733 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.733 Delay0 00:21:41.733 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.733 14:03:32 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.733 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.733 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.991 [2024-07-23 14:03:32.755504] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.991 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.991 14:03:32 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:41.991 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.991 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.991 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
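The subsystem being assembled here is the whole point of the test: a 64 MB malloc bdev wrapped in a delay bdev (Delay0) and exported as cnode1, so the target's I/O latency can be changed at runtime. Per the bdev_delay RPCs the latency arguments are microseconds: the test starts at 30 µs, and once fio is writing it raises them to 31000000 (about 31 s, just past the initiator's default 30 s command timeout; the trace shows p99_write pushed an order of magnitude higher, to 310000000), then restores 30 µs so I/O stalls and recovers instead of failing. A sketch of the RPC sequence, with a direct rpc.py invocation standing in for the test's rpc_cmd wrapper:

# Hedged reconstruction of the initiator_timeout target setup; the socket
# path and the bare rpc.py calls are assumptions, the arguments are from
# the trace.
rpc="scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# With fio running against /dev/nvme0n1: stall past the initiator timeout,
# wait, then bring the latency back down.
for lat in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$lat" 31000000
done
sleep 3
for lat in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$lat" 30
done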
00:21:41.991 14:03:32 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:41.991 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.991 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.991 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.991 14:03:32 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:41.991 14:03:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:41.991 14:03:32 -- common/autotest_common.sh@10 -- # set +x 00:21:41.991 [2024-07-23 14:03:32.784427] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.991 14:03:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:41.991 14:03:32 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:21:43.365 14:03:33 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:21:43.365 14:03:33 -- common/autotest_common.sh@1177 -- # local i=0 00:21:43.365 14:03:33 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:21:43.365 14:03:33 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:21:43.365 14:03:33 -- common/autotest_common.sh@1184 -- # sleep 2 00:21:45.262 14:03:35 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:21:45.262 14:03:35 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:21:45.262 14:03:35 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:21:45.262 14:03:35 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:21:45.262 14:03:35 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:21:45.262 14:03:35 -- common/autotest_common.sh@1187 -- # return 0 00:21:45.262 14:03:35 -- target/initiator_timeout.sh@35 -- # fio_pid=3323130 00:21:45.262 14:03:35 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:21:45.262 14:03:35 -- target/initiator_timeout.sh@37 -- # sleep 3 00:21:45.262 [global] 00:21:45.262 thread=1 00:21:45.262 invalidate=1 00:21:45.262 rw=write 00:21:45.262 time_based=1 00:21:45.262 runtime=60 00:21:45.262 ioengine=libaio 00:21:45.262 direct=1 00:21:45.262 bs=4096 00:21:45.262 iodepth=1 00:21:45.262 norandommap=0 00:21:45.262 numjobs=1 00:21:45.262 00:21:45.262 verify_dump=1 00:21:45.262 verify_backlog=512 00:21:45.262 verify_state_save=0 00:21:45.262 do_verify=1 00:21:45.262 verify=crc32c-intel 00:21:45.262 [job0] 00:21:45.262 filename=/dev/nvme0n1 00:21:45.262 Could not set queue depth (nvme0n1) 00:21:45.262 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:21:45.262 fio-3.35 00:21:45.262 Starting 1 thread 00:21:48.546 14:03:38 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:21:48.546 14:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.546 14:03:38 -- common/autotest_common.sh@10 -- # set +x 00:21:48.546 true 00:21:48.546 14:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.546 14:03:38 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:21:48.546 14:03:38 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.546 14:03:38 -- common/autotest_common.sh@10 -- # set +x 00:21:48.546 true 00:21:48.546 14:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.546 14:03:39 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:21:48.546 14:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.546 14:03:39 -- common/autotest_common.sh@10 -- # set +x 00:21:48.546 true 00:21:48.546 14:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.546 14:03:39 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:21:48.546 14:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.546 14:03:39 -- common/autotest_common.sh@10 -- # set +x 00:21:48.546 true 00:21:48.546 14:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.546 14:03:39 -- target/initiator_timeout.sh@45 -- # sleep 3 00:21:51.078 14:03:42 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:21:51.078 14:03:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.078 14:03:42 -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 true 00:21:51.078 14:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.078 14:03:42 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:21:51.078 14:03:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.078 14:03:42 -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 true 00:21:51.078 14:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.078 14:03:42 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:21:51.078 14:03:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.078 14:03:42 -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 true 00:21:51.078 14:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.078 14:03:42 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:21:51.078 14:03:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.078 14:03:42 -- common/autotest_common.sh@10 -- # set +x 00:21:51.078 true 00:21:51.078 14:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.078 14:03:42 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:21:51.078 14:03:42 -- target/initiator_timeout.sh@54 -- # wait 3323130 00:22:47.344 00:22:47.344 job0: (groupid=0, jobs=1): err= 0: pid=3323250: Tue Jul 23 14:04:36 2024 00:22:47.344 read: IOPS=7, BW=30.2KiB/s (30.9kB/s)(1812KiB/60014msec) 00:22:47.344 slat (usec): min=8, max=2939, avg=26.44, stdev=137.30 00:22:47.344 clat (usec): min=863, max=41292k, avg=132100.16, stdev=1938154.91 00:22:47.344 lat (usec): min=874, max=41292k, avg=132126.60, stdev=1938154.74 00:22:47.344 clat percentiles (usec): 00:22:47.344 | 1.00th=[ 1074], 5.00th=[ 41681], 10.00th=[ 41681], 00:22:47.344 | 20.00th=[ 41681], 30.00th=[ 42206], 40.00th=[ 42206], 00:22:47.344 | 50.00th=[ 42206], 60.00th=[ 42206], 70.00th=[ 42206], 00:22:47.344 | 80.00th=[ 42206], 90.00th=[ 42206], 95.00th=[ 42730], 00:22:47.344 | 99.00th=[ 43254], 99.50th=[ 43254], 99.90th=[17112761], 00:22:47.344 | 99.95th=[17112761], 99.99th=[17112761] 00:22:47.344 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60014msec); 0 zone resets 00:22:47.344 slat (nsec): min=9178, max=40545, avg=10428.02, stdev=2453.96 00:22:47.344 clat (usec): min=219, max=1268, 
avg=294.34, stdev=90.76 00:22:47.344 lat (usec): min=229, max=1279, avg=304.77, stdev=91.78 00:22:47.344 clat percentiles (usec): 00:22:47.344 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 251], 00:22:47.344 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:22:47.344 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 400], 95.00th=[ 494], 00:22:47.344 | 99.00th=[ 515], 99.50th=[ 717], 99.90th=[ 1270], 99.95th=[ 1270], 00:22:47.344 | 99.99th=[ 1270] 00:22:47.344 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:22:47.344 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:22:47.344 lat (usec) : 250=10.36%, 500=41.04%, 750=1.45%, 1000=0.41% 00:22:47.344 lat (msec) : 2=0.93%, 50=45.70%, >=2000=0.10% 00:22:47.344 cpu : usr=0.02%, sys=0.02%, ctx=966, majf=0, minf=2 00:22:47.344 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:22:47.344 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.344 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:47.344 issued rwts: total=453,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:47.344 latency : target=0, window=0, percentile=100.00%, depth=1 00:22:47.344 00:22:47.344 Run status group 0 (all jobs): 00:22:47.344 READ: bw=30.2KiB/s (30.9kB/s), 30.2KiB/s-30.2KiB/s (30.9kB/s-30.9kB/s), io=1812KiB (1855kB), run=60014-60014msec 00:22:47.344 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60014-60014msec 00:22:47.344 00:22:47.344 Disk stats (read/write): 00:22:47.344 nvme0n1: ios=549/512, merge=0/0, ticks=20262/147, in_queue=20409, util=100.00% 00:22:47.344 14:04:36 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:47.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:47.344 14:04:36 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:22:47.344 14:04:36 -- common/autotest_common.sh@1198 -- # local i=0 00:22:47.344 14:04:36 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:47.344 14:04:36 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:47.344 14:04:36 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:47.344 14:04:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:22:47.344 14:04:36 -- common/autotest_common.sh@1210 -- # return 0 00:22:47.344 14:04:36 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:22:47.344 14:04:36 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:22:47.344 nvmf hotplug test: fio successful as expected 00:22:47.344 14:04:36 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:47.344 14:04:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:47.344 14:04:36 -- common/autotest_common.sh@10 -- # set +x 00:22:47.344 14:04:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:47.345 14:04:36 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:22:47.345 14:04:36 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:22:47.345 14:04:36 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:22:47.345 14:04:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:47.345 14:04:36 -- nvmf/common.sh@116 -- # sync 00:22:47.345 14:04:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:47.345 14:04:36 -- nvmf/common.sh@119 -- 
# set +e 00:22:47.345 14:04:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:47.345 14:04:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:47.345 rmmod nvme_tcp 00:22:47.345 rmmod nvme_fabrics 00:22:47.345 rmmod nvme_keyring 00:22:47.345 14:04:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:47.345 14:04:36 -- nvmf/common.sh@123 -- # set -e 00:22:47.345 14:04:36 -- nvmf/common.sh@124 -- # return 0 00:22:47.345 14:04:36 -- nvmf/common.sh@477 -- # '[' -n 3322400 ']' 00:22:47.345 14:04:36 -- nvmf/common.sh@478 -- # killprocess 3322400 00:22:47.345 14:04:36 -- common/autotest_common.sh@926 -- # '[' -z 3322400 ']' 00:22:47.345 14:04:36 -- common/autotest_common.sh@930 -- # kill -0 3322400 00:22:47.345 14:04:36 -- common/autotest_common.sh@931 -- # uname 00:22:47.345 14:04:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:47.345 14:04:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3322400 00:22:47.345 14:04:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:47.345 14:04:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:47.345 14:04:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3322400' 00:22:47.345 killing process with pid 3322400 00:22:47.345 14:04:36 -- common/autotest_common.sh@945 -- # kill 3322400 00:22:47.345 14:04:36 -- common/autotest_common.sh@950 -- # wait 3322400 00:22:47.345 14:04:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:47.345 14:04:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:47.345 14:04:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:47.345 14:04:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:47.345 14:04:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:47.345 14:04:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.345 14:04:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:47.345 14:04:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:48.284 14:04:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:48.284 00:22:48.284 real 1m12.585s 00:22:48.284 user 4m24.978s 00:22:48.284 sys 0m5.642s 00:22:48.284 14:04:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:48.284 14:04:39 -- common/autotest_common.sh@10 -- # set +x 00:22:48.284 ************************************ 00:22:48.284 END TEST nvmf_initiator_timeout 00:22:48.284 ************************************ 00:22:48.284 14:04:39 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:22:48.284 14:04:39 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:22:48.284 14:04:39 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:22:48.284 14:04:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:48.284 14:04:39 -- common/autotest_common.sh@10 -- # set +x 00:22:53.556 14:04:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:53.556 14:04:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:53.556 14:04:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:53.556 14:04:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:53.556 14:04:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:53.556 14:04:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:53.556 14:04:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:53.556 14:04:43 -- nvmf/common.sh@294 -- # net_devs=() 00:22:53.556 14:04:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:53.556 14:04:43 -- nvmf/common.sh@295 -- # e810=() 00:22:53.556 14:04:43 -- nvmf/common.sh@295 -- # 
local -ga e810 00:22:53.556 14:04:43 -- nvmf/common.sh@296 -- # x722=() 00:22:53.556 14:04:43 -- nvmf/common.sh@296 -- # local -ga x722 00:22:53.556 14:04:43 -- nvmf/common.sh@297 -- # mlx=() 00:22:53.556 14:04:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:53.556 14:04:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:53.556 14:04:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:53.556 14:04:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:53.556 14:04:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:53.556 14:04:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:53.556 14:04:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:53.556 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:53.556 14:04:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:53.556 14:04:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:53.556 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:53.556 14:04:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:53.556 14:04:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:53.556 14:04:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:53.556 14:04:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.556 14:04:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:53.556 14:04:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.556 14:04:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:53.556 Found net devices under 0000:86:00.0: cvl_0_0 00:22:53.556 14:04:43 
-- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.556 14:04:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:53.556 14:04:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:53.556 14:04:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:53.556 14:04:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:53.556 14:04:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:53.556 Found net devices under 0000:86:00.1: cvl_0_1 00:22:53.556 14:04:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:53.556 14:04:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:53.556 14:04:43 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:53.556 14:04:43 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:22:53.556 14:04:43 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:53.556 14:04:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:53.556 14:04:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:53.556 14:04:43 -- common/autotest_common.sh@10 -- # set +x 00:22:53.556 ************************************ 00:22:53.556 START TEST nvmf_perf_adq 00:22:53.556 ************************************ 00:22:53.557 14:04:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:53.557 * Looking for test storage... 00:22:53.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.557 14:04:43 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.557 14:04:43 -- nvmf/common.sh@7 -- # uname -s 00:22:53.557 14:04:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.557 14:04:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.557 14:04:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.557 14:04:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.557 14:04:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.557 14:04:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.557 14:04:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.557 14:04:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.557 14:04:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.557 14:04:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.557 14:04:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.557 14:04:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:53.557 14:04:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.557 14:04:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.557 14:04:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.557 14:04:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.557 14:04:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.557 14:04:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.557 14:04:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.557 14:04:43 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.557 14:04:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.557 14:04:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.557 14:04:43 -- paths/export.sh@5 -- # export PATH 00:22:53.557 14:04:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.557 14:04:43 -- nvmf/common.sh@46 -- # : 0 00:22:53.557 14:04:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:53.557 14:04:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:53.557 14:04:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:53.557 14:04:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.557 14:04:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.557 14:04:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:53.557 14:04:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:53.557 14:04:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:53.557 14:04:43 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:53.557 14:04:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:53.557 14:04:43 -- common/autotest_common.sh@10 -- # set +x 00:22:58.829 14:04:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:58.829 14:04:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:58.829 14:04:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:58.829 14:04:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:58.829 14:04:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:58.829 14:04:48 -- nvmf/common.sh@292 -- # pci_drivers=() 
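The variable declarations starting here are gather_supported_nvmf_pci_devs running again for the perf_adq test: it buckets known Intel (0x8086) and Mellanox (0x15b3) device IDs into the e810, x722 and mlx arrays and, for a TCP e810 run, keeps only the e810 list. A reduced sketch of that classification driven by lspci -nD instead of the script's internal pci_bus_cache (the lspci parsing is a stand-in, not the script's actual mechanism):

# Bucket NICs by vendor:device ID the way nvmf/common.sh does.
intel=8086; mellanox=15b3
e810=(); x722=(); mlx=()

# lspci -nD prints: "0000:86:00.0 0200: 8086:159b (rev 02)"
while read -r addr class id rest; do
    case "$id" in
        "$intel:1592" | "$intel:159b") e810+=("$addr") ;;  # E810 variants
        "$intel:37d2")                 x722+=("$addr") ;;  # X722
        "$mellanox":*)                 mlx+=("$addr") ;;   # ConnectX family
    esac
done < <(lspci -nD)

printf 'e810 ports: %s\n' "${e810[@]}"   # expect 0000:86:00.0 and 0000:86:00.1 here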
00:22:58.829 14:04:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:58.829 14:04:48 -- nvmf/common.sh@294 -- # net_devs=() 00:22:58.829 14:04:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:58.829 14:04:48 -- nvmf/common.sh@295 -- # e810=() 00:22:58.829 14:04:48 -- nvmf/common.sh@295 -- # local -ga e810 00:22:58.829 14:04:48 -- nvmf/common.sh@296 -- # x722=() 00:22:58.829 14:04:48 -- nvmf/common.sh@296 -- # local -ga x722 00:22:58.829 14:04:48 -- nvmf/common.sh@297 -- # mlx=() 00:22:58.829 14:04:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:58.829 14:04:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.829 14:04:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:58.829 14:04:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:58.829 14:04:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:58.829 14:04:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:58.829 14:04:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:58.829 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:58.829 14:04:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:58.829 14:04:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:58.829 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:58.829 14:04:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:58.829 14:04:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:58.829 14:04:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:58.829 14:04:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.829 14:04:48 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:58.829 14:04:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.829 14:04:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:58.829 Found net devices under 0000:86:00.0: cvl_0_0 00:22:58.829 14:04:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.829 14:04:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:58.829 14:04:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.829 14:04:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:58.829 14:04:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.829 14:04:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:58.829 Found net devices under 0000:86:00.1: cvl_0_1 00:22:58.829 14:04:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.829 14:04:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:58.829 14:04:48 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.829 14:04:48 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:58.829 14:04:48 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:58.829 14:04:48 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:22:58.829 14:04:48 -- target/perf_adq.sh@52 -- # rmmod ice 00:22:59.087 14:04:49 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:01.129 14:04:51 -- target/perf_adq.sh@54 -- # sleep 5 00:23:06.410 14:04:56 -- target/perf_adq.sh@67 -- # nvmftestinit 00:23:06.410 14:04:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:06.410 14:04:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.410 14:04:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:06.410 14:04:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:06.410 14:04:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:06.410 14:04:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.410 14:04:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.410 14:04:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.410 14:04:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:06.410 14:04:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:06.410 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:23:06.410 14:04:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:06.410 14:04:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:06.410 14:04:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:06.410 14:04:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:06.410 14:04:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:06.410 14:04:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:06.410 14:04:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:06.410 14:04:56 -- nvmf/common.sh@294 -- # net_devs=() 00:23:06.410 14:04:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:06.410 14:04:56 -- nvmf/common.sh@295 -- # e810=() 00:23:06.410 14:04:56 -- nvmf/common.sh@295 -- # local -ga e810 00:23:06.410 14:04:56 -- nvmf/common.sh@296 -- # x722=() 00:23:06.410 14:04:56 -- nvmf/common.sh@296 -- # local -ga x722 00:23:06.410 14:04:56 -- nvmf/common.sh@297 -- # mlx=() 00:23:06.410 14:04:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:06.410 14:04:56 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.410 14:04:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:06.410 14:04:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:06.410 14:04:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:06.410 14:04:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:06.410 14:04:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:06.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:06.410 14:04:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:06.410 14:04:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:06.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:06.410 14:04:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:06.410 14:04:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:06.410 14:04:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:06.410 14:04:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.410 14:04:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:06.411 14:04:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.411 14:04:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:06.411 Found net devices under 0000:86:00.0: cvl_0_0 00:23:06.411 14:04:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.411 14:04:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:06.411 14:04:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.411 14:04:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
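A supported PCI function is mapped to its kernel interface by globbing /sys/bus/pci/devices/<addr>/net/, which is exactly what produces the "Found net devices under ..." lines in this scan. The same lookup as a standalone snippet, with the address taken from this run:

# Resolve a PCI function to its netdev(s) via sysfs (prints cvl_0_0 here).
pci=0000:86:00.0
pci_net_devs=(/sys/bus/pci/devices/"$pci"/net/*)
for d in "${pci_net_devs[@]}"; do
    [ -e "$d" ] && echo "Found net devices under $pci: ${d##*/}"
done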
00:23:06.411 14:04:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.411 14:04:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:06.411 Found net devices under 0000:86:00.1: cvl_0_1 00:23:06.411 14:04:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.411 14:04:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:06.411 14:04:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:06.411 14:04:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:06.411 14:04:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:06.411 14:04:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:06.411 14:04:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.411 14:04:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.411 14:04:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.411 14:04:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:06.411 14:04:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.411 14:04:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.411 14:04:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:06.411 14:04:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.411 14:04:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.411 14:04:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:06.411 14:04:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:06.411 14:04:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.411 14:04:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.411 14:04:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.411 14:04:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.411 14:04:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:06.411 14:04:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.411 14:04:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.411 14:04:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.411 14:04:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:06.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:23:06.411 00:23:06.411 --- 10.0.0.2 ping statistics --- 00:23:06.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.411 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:23:06.411 14:04:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:06.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.305 ms 00:23:06.411 00:23:06.411 --- 10.0.0.1 ping statistics --- 00:23:06.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.411 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:23:06.411 14:04:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.411 14:04:56 -- nvmf/common.sh@410 -- # return 0 00:23:06.411 14:04:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:06.411 14:04:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.411 14:04:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:06.411 14:04:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:06.411 14:04:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.411 14:04:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:06.411 14:04:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:06.411 14:04:56 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:06.411 14:04:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:06.411 14:04:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:06.411 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:23:06.411 14:04:56 -- nvmf/common.sh@469 -- # nvmfpid=3340861 00:23:06.411 14:04:56 -- nvmf/common.sh@470 -- # waitforlisten 3340861 00:23:06.411 14:04:56 -- common/autotest_common.sh@819 -- # '[' -z 3340861 ']' 00:23:06.411 14:04:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.411 14:04:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:06.411 14:04:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.411 14:04:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:06.411 14:04:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:06.411 14:04:56 -- common/autotest_common.sh@10 -- # set +x 00:23:06.411 [2024-07-23 14:04:57.005803] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:06.411 [2024-07-23 14:04:57.005845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.411 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.411 [2024-07-23 14:04:57.062716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.411 [2024-07-23 14:04:57.141496] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:06.411 [2024-07-23 14:04:57.141606] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.411 [2024-07-23 14:04:57.141615] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.411 [2024-07-23 14:04:57.141622] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
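The ping exchanges above close out nvmf_tcp_init: the two E810 ports are wired back-to-back, with the target port cvl_0_0 (10.0.0.2) moved into the cvl_0_0_ns_spdk namespace and the initiator port cvl_0_1 (10.0.0.1) left in the root namespace, and nvmf_tgt is now starting inside that namespace with --wait-for-rpc. A condensed replay of the setup commands traced above (run as root; interface names and addresses are the log's own):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the netns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                   # root netns -> target netns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target netns -> root netns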
00:23:06.411 [2024-07-23 14:04:57.141658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.411 [2024-07-23 14:04:57.141679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.411 [2024-07-23 14:04:57.141766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.411 [2024-07-23 14:04:57.141768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:06.981 14:04:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:06.981 14:04:57 -- common/autotest_common.sh@852 -- # return 0 00:23:06.981 14:04:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:06.981 14:04:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:06.981 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:23:06.981 14:04:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.981 14:04:57 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:23:06.981 14:04:57 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:06.981 14:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.981 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:23:06.981 14:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.981 14:04:57 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:06.981 14:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.981 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:23:06.981 14:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.981 14:04:57 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:06.981 14:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.981 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:23:06.981 [2024-07-23 14:04:57.958177] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.981 14:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.981 14:04:57 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:06.981 14:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.981 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:23:06.981 Malloc1 00:23:06.981 14:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:06.981 14:04:57 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.981 14:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:06.981 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:23:07.240 14:04:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.240 14:04:57 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:07.240 14:04:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.240 14:04:57 -- common/autotest_common.sh@10 -- # set +x 00:23:07.240 14:04:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:07.240 14:04:58 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:07.240 14:04:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:07.240 14:04:58 -- common/autotest_common.sh@10 -- # set +x 00:23:07.240 [2024-07-23 14:04:58.009942] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.240 14:04:58 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:07.240 14:04:58 -- target/perf_adq.sh@73 -- # perfpid=3341071
00:23:07.240 14:04:58 -- target/perf_adq.sh@74 -- # sleep 2
00:23:07.240 14:04:58 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:23:07.240 EAL: No free 2048 kB hugepages reported on node 1
00:23:09.148 14:05:00 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats
00:23:09.148 14:05:00 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:23:09.148 14:05:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:23:09.148 14:05:00 -- target/perf_adq.sh@76 -- # wc -l
00:23:09.148 14:05:00 -- common/autotest_common.sh@10 -- # set +x
00:23:09.148 14:05:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:23:09.148 14:05:00 -- target/perf_adq.sh@76 -- # count=4
00:23:09.148 14:05:00 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]]
00:23:09.148 14:05:00 -- target/perf_adq.sh@81 -- # wait 3341071
00:23:17.273 Initializing NVMe Controllers
00:23:17.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:17.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:23:17.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:23:17.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:23:17.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:23:17.273 Initialization complete. Launching workers.
00:23:17.273 ========================================================
00:23:17.273                                                                          Latency(us)
00:23:17.273 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:23:17.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:   10583.09      41.34    6048.15    2028.59   10545.56
00:23:17.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:   10696.59      41.78    5982.92    2031.87   11780.59
00:23:17.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:   10664.29      41.66    6000.90    1421.13   12178.93
00:23:17.273 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:   10535.09      41.15    6075.56    1423.40   12256.19
00:23:17.273 ========================================================
00:23:17.273 Total                                                                  :   42479.06     165.93    6026.66    1421.13   12256.19
00:23:17.273
00:23:17.273 14:05:08 -- target/perf_adq.sh@82 -- # nvmftestfini
00:23:17.273 14:05:08 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:17.273 14:05:08 -- nvmf/common.sh@116 -- # sync
00:23:17.273 14:05:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:17.273 14:05:08 -- nvmf/common.sh@119 -- # set +e
00:23:17.273 14:05:08 -- nvmf/common.sh@120 -- # for i in {1..20}
00:23:17.273 14:05:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:23:17.273 rmmod nvme_tcp
00:23:17.273 rmmod nvme_fabrics
00:23:17.273 rmmod nvme_keyring
00:23:17.273 14:05:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:23:17.273 14:05:08 -- nvmf/common.sh@123 -- # set -e
00:23:17.273 14:05:08 -- nvmf/common.sh@124 -- # return 0
00:23:17.273 14:05:08 -- nvmf/common.sh@477 -- # '[' -n 3340861 ']'
00:23:17.273 14:05:08 -- nvmf/common.sh@478 -- # killprocess 3340861
00:23:17.273 14:05:08 -- common/autotest_common.sh@926 -- # '[' -z 3340861 ']'
00:23:17.273 14:05:08 -- common/autotest_common.sh@930
-- # kill -0 3340861 00:23:17.273 14:05:08 -- common/autotest_common.sh@931 -- # uname 00:23:17.273 14:05:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:17.273 14:05:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3340861 00:23:17.273 14:05:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:23:17.273 14:05:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:23:17.273 14:05:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3340861' 00:23:17.273 killing process with pid 3340861 00:23:17.273 14:05:08 -- common/autotest_common.sh@945 -- # kill 3340861 00:23:17.273 14:05:08 -- common/autotest_common.sh@950 -- # wait 3340861 00:23:17.533 14:05:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:17.533 14:05:08 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:17.533 14:05:08 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:17.533 14:05:08 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.533 14:05:08 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:17.533 14:05:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.533 14:05:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.533 14:05:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.067 14:05:10 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:23:20.067 14:05:10 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:23:20.067 14:05:10 -- target/perf_adq.sh@52 -- # rmmod ice 00:23:21.004 14:05:11 -- target/perf_adq.sh@53 -- # modprobe ice 00:23:22.382 14:05:13 -- target/perf_adq.sh@54 -- # sleep 5 00:23:27.662 14:05:18 -- target/perf_adq.sh@87 -- # nvmftestinit 00:23:27.662 14:05:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:27.662 14:05:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.662 14:05:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:27.662 14:05:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:27.662 14:05:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:27.662 14:05:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.662 14:05:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.662 14:05:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.662 14:05:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:27.662 14:05:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:27.662 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:23:27.662 14:05:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:27.662 14:05:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:27.662 14:05:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:27.662 14:05:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:27.662 14:05:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:27.662 14:05:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:27.662 14:05:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:27.662 14:05:18 -- nvmf/common.sh@294 -- # net_devs=() 00:23:27.662 14:05:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:27.662 14:05:18 -- nvmf/common.sh@295 -- # e810=() 00:23:27.662 14:05:18 -- nvmf/common.sh@295 -- # local -ga e810 00:23:27.662 14:05:18 -- nvmf/common.sh@296 -- # x722=() 00:23:27.662 14:05:18 -- nvmf/common.sh@296 -- # local -ga x722 00:23:27.662 14:05:18 -- nvmf/common.sh@297 -- # mlx=() 00:23:27.662 
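Before this second nvmftestinit pass, adq_reload_driver (perf_adq.sh@52-54 above) bounced the NIC driver so the E810 comes back with clean queue and channel state for the ADQ run; the PCI re-scan now starting simply repeats the earlier discovery. The reload itself reduces to:

rmmod ice        # removes both E810 netdevs (the test netns was already torn down above)
modprobe ice     # re-registers both ports with default queue configuration
sleep 5          # give the driver time to bring the netdevs back up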
14:05:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:27.662 14:05:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.662 14:05:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:27.662 14:05:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:27.662 14:05:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:27.662 14:05:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:27.662 14:05:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:27.662 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:27.662 14:05:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:27.662 14:05:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:27.662 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:27.662 14:05:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:27.662 14:05:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:27.662 14:05:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.662 14:05:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:27.662 14:05:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.662 14:05:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:27.662 Found net devices under 0000:86:00.0: cvl_0_0 00:23:27.662 14:05:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.662 14:05:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:27.662 14:05:18 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.662 14:05:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:27.662 14:05:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.662 14:05:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:27.662 Found net devices under 0000:86:00.1: cvl_0_1 00:23:27.662 14:05:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.662 14:05:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:27.662 14:05:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:27.662 14:05:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:27.662 14:05:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:27.662 14:05:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.662 14:05:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.662 14:05:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.662 14:05:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:27.662 14:05:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.662 14:05:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.662 14:05:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:27.662 14:05:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.662 14:05:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.662 14:05:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:27.662 14:05:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:27.662 14:05:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.662 14:05:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.662 14:05:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.662 14:05:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.662 14:05:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:27.662 14:05:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.662 14:05:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.662 14:05:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.662 14:05:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:27.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:23:27.662 00:23:27.662 --- 10.0.0.2 ping statistics --- 00:23:27.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.662 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:23:27.922 14:05:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:23:27.922 00:23:27.922 --- 10.0.0.1 ping statistics --- 00:23:27.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.922 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:23:27.922 14:05:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.922 14:05:18 -- nvmf/common.sh@410 -- # return 0 00:23:27.922 14:05:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:27.922 14:05:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.922 14:05:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:27.922 14:05:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:27.922 14:05:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.922 14:05:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:27.922 14:05:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:27.922 14:05:18 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:23:27.922 14:05:18 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:27.923 14:05:18 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:27.923 14:05:18 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:27.923 net.core.busy_poll = 1 00:23:27.923 14:05:18 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:27.923 net.core.busy_read = 1 00:23:27.923 14:05:18 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:27.923 14:05:18 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:27.923 14:05:18 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:27.923 14:05:18 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:27.923 14:05:18 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:27.923 14:05:18 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:27.923 14:05:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:27.923 14:05:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:27.923 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:23:27.923 14:05:18 -- nvmf/common.sh@469 -- # nvmfpid=3344912 00:23:27.923 14:05:18 -- nvmf/common.sh@470 -- # waitforlisten 3344912 00:23:27.923 14:05:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:27.923 14:05:18 -- common/autotest_common.sh@819 -- # '[' -z 3344912 ']' 00:23:27.923 14:05:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.923 14:05:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:27.923 14:05:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
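adq_configure_driver, traced above, is where ADQ is actually wired up: hardware TC offload goes on for the target port, busy polling is enabled system-wide, an mqprio qdisc splits the port's queues into two traffic classes (TC0 on queues 0-1, TC1 on queues 2-3, offloaded in channel mode), and a hardware-only flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into TC1. Condensed, with ns() standing in for the netns prefix the log uses:

ns() { ip netns exec cvl_0_0_ns_spdk "$@"; }
ns ethtool --offload cvl_0_0 hw-tc-offload on
ns ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1     # spin in the socket layer instead of sleeping
sysctl -w net.core.busy_read=1
ns tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ns tc qdisc add dev cvl_0_0 ingress
ns tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
# scripts/perf/nvmf/set_xps_rxqs (an SPDK helper) then aligns XPS so transmit
# queue selection follows the receive queues.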
00:23:27.923 14:05:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:27.923 14:05:18 -- common/autotest_common.sh@10 -- # set +x 00:23:28.183 [2024-07-23 14:05:18.990350] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:28.183 [2024-07-23 14:05:18.990401] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.183 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.183 [2024-07-23 14:05:19.048855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.183 [2024-07-23 14:05:19.129167] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:28.183 [2024-07-23 14:05:19.129276] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.183 [2024-07-23 14:05:19.129284] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.183 [2024-07-23 14:05:19.129290] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:28.183 [2024-07-23 14:05:19.129322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.183 [2024-07-23 14:05:19.129417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.183 [2024-07-23 14:05:19.129483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.183 [2024-07-23 14:05:19.129484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:29.119 14:05:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:29.120 14:05:19 -- common/autotest_common.sh@852 -- # return 0 00:23:29.120 14:05:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:29.120 14:05:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:29.120 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.120 14:05:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.120 14:05:19 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:23:29.120 14:05:19 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:29.120 14:05:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.120 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.120 14:05:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.120 14:05:19 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:23:29.120 14:05:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.120 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.120 14:05:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.120 14:05:19 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:29.120 14:05:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.120 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.120 [2024-07-23 14:05:19.927747] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.120 14:05:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.120 14:05:19 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:29.120 14:05:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.120 14:05:19 -- 
common/autotest_common.sh@10 -- # set +x 00:23:29.120 Malloc1 00:23:29.120 14:05:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.120 14:05:19 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:29.120 14:05:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.120 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.120 14:05:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.120 14:05:19 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:29.120 14:05:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.120 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.120 14:05:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.120 14:05:19 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:29.120 14:05:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:29.120 14:05:19 -- common/autotest_common.sh@10 -- # set +x 00:23:29.120 [2024-07-23 14:05:19.971200] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.120 14:05:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:29.120 14:05:19 -- target/perf_adq.sh@94 -- # perfpid=3344961 00:23:29.120 14:05:19 -- target/perf_adq.sh@95 -- # sleep 2 00:23:29.120 14:05:19 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:29.120 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.017 14:05:21 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:23:31.017 14:05:21 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:31.017 14:05:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:31.017 14:05:21 -- target/perf_adq.sh@97 -- # wc -l 00:23:31.017 14:05:21 -- common/autotest_common.sh@10 -- # set +x 00:23:31.017 14:05:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:31.017 14:05:22 -- target/perf_adq.sh@97 -- # count=2 00:23:31.017 14:05:22 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:23:31.017 14:05:22 -- target/perf_adq.sh@103 -- # wait 3344961 00:23:40.975 Initializing NVMe Controllers 00:23:40.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:40.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:40.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:40.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:40.975 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:40.975 Initialization complete. Launching workers. 
00:23:40.975 ========================================================
00:23:40.975                                                                          Latency(us)
00:23:40.975 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:23:40.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:    5686.40      22.21   11292.54    2001.59   56697.73
00:23:40.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:    6062.70      23.68   10579.75    1741.91   56563.71
00:23:40.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:   11855.20      46.31    5416.66    1460.62   47163.13
00:23:40.975 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:    6549.70      25.58    9797.94    1874.63   55433.91
00:23:40.975 ========================================================
00:23:40.975 Total                                                                  :   30153.99     117.79    8514.46    1460.62   56697.73
00:23:40.975
00:23:40.975 14:05:30 -- target/perf_adq.sh@104 -- # nvmftestfini
00:23:40.975 14:05:30 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:40.975 14:05:30 -- nvmf/common.sh@116 -- # sync
00:23:40.975 14:05:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:40.975 14:05:30 -- nvmf/common.sh@119 -- # set +e
00:23:40.975 14:05:30 -- nvmf/common.sh@120 -- # for i in {1..20}
00:23:40.975 14:05:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:23:40.975 rmmod nvme_tcp
00:23:40.975 rmmod nvme_fabrics
00:23:40.975 rmmod nvme_keyring
00:23:40.975 14:05:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:23:40.975 14:05:30 -- nvmf/common.sh@123 -- # set -e
00:23:40.975 14:05:30 -- nvmf/common.sh@124 -- # return 0
00:23:40.975 14:05:30 -- nvmf/common.sh@477 -- # '[' -n 3344912 ']'
00:23:40.975 14:05:30 -- nvmf/common.sh@478 -- # killprocess 3344912
00:23:40.975 14:05:30 -- common/autotest_common.sh@926 -- # '[' -z 3344912 ']'
00:23:40.975 14:05:30 -- common/autotest_common.sh@930 -- # kill -0 3344912
00:23:40.975 14:05:30 -- common/autotest_common.sh@931 -- # uname
00:23:40.975 14:05:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:40.975 14:05:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3344912
00:23:40.975 14:05:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:23:40.975 14:05:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:23:40.975 14:05:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3344912'
00:23:40.975 killing process with pid 3344912
00:23:40.975 14:05:30 -- common/autotest_common.sh@945 -- # kill 3344912
00:23:40.975 14:05:30 -- common/autotest_common.sh@950 -- # wait 3344912
00:23:40.975 14:05:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:23:40.975 14:05:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:23:40.975 14:05:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:23:40.975 14:05:30 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:40.975 14:05:30 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:23:40.975 14:05:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:40.975 14:05:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:40.975 14:05:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
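Two settings distinguish this ADQ pass from the baseline above: the socket layer was configured with --enable-placement-id 1 and the transport with --sock-priority 1, so accepted connections are pinned to the poll group that owns their steered hardware queue. The earlier nvmf_get_stats check encodes that expectation: in the baseline run the script counted poll groups with exactly one active qpair (count=4, all groups busy), while here it counts poll groups with zero active qpairs and, judging by the [[ 2 -lt 2 ]] guard, fails if fewer than two of the four ended up idle. A sketch of that check, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (path assumed):

count=$(./scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
        | wc -l)
if [[ $count -lt 2 ]]; then
    echo 'ADQ placement had no effect: connections landed on every poll group' >&2
fi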
************************************ 00:23:41.913 END TEST nvmf_perf_adq 00:23:41.913 ************************************ 00:23:41.913 14:05:32 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:41.913 14:05:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:41.913 14:05:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:41.913 14:05:32 -- common/autotest_common.sh@10 -- # set +x 00:23:41.913 ************************************ 00:23:41.913 START TEST nvmf_shutdown 00:23:41.913 ************************************ 00:23:41.913 14:05:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:41.913 * Looking for test storage... 00:23:41.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:41.913 14:05:32 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.913 14:05:32 -- nvmf/common.sh@7 -- # uname -s 00:23:41.913 14:05:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.913 14:05:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.913 14:05:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.913 14:05:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.913 14:05:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.913 14:05:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.913 14:05:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.913 14:05:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.913 14:05:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.913 14:05:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.913 14:05:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:41.913 14:05:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:41.913 14:05:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.913 14:05:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.913 14:05:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.913 14:05:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.913 14:05:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.913 14:05:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.913 14:05:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.913 14:05:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.913 14:05:32 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.913 14:05:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.913 14:05:32 -- paths/export.sh@5 -- # export PATH 00:23:41.913 14:05:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.913 14:05:32 -- nvmf/common.sh@46 -- # : 0 00:23:41.913 14:05:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:41.913 14:05:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:41.913 14:05:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:41.913 14:05:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.914 14:05:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.914 14:05:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:41.914 14:05:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:41.914 14:05:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:41.914 14:05:32 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:41.914 14:05:32 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:41.914 14:05:32 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:41.914 14:05:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:41.914 14:05:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:41.914 14:05:32 -- common/autotest_common.sh@10 -- # set +x 00:23:41.914 ************************************ 00:23:41.914 START TEST nvmf_shutdown_tc1 00:23:41.914 ************************************ 00:23:41.914 14:05:32 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:23:41.914 14:05:32 -- target/shutdown.sh@74 -- # starttarget 00:23:41.914 14:05:32 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:41.914 14:05:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:41.914 14:05:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.914 14:05:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:41.914 14:05:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:41.914 14:05:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:41.914 
14:05:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.914 14:05:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.914 14:05:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.914 14:05:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:41.914 14:05:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:41.914 14:05:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:41.914 14:05:32 -- common/autotest_common.sh@10 -- # set +x 00:23:47.249 14:05:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:47.249 14:05:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:47.249 14:05:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:47.249 14:05:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:47.249 14:05:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:47.249 14:05:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:47.249 14:05:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:47.249 14:05:37 -- nvmf/common.sh@294 -- # net_devs=() 00:23:47.249 14:05:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:47.249 14:05:37 -- nvmf/common.sh@295 -- # e810=() 00:23:47.249 14:05:37 -- nvmf/common.sh@295 -- # local -ga e810 00:23:47.249 14:05:37 -- nvmf/common.sh@296 -- # x722=() 00:23:47.249 14:05:37 -- nvmf/common.sh@296 -- # local -ga x722 00:23:47.249 14:05:37 -- nvmf/common.sh@297 -- # mlx=() 00:23:47.250 14:05:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:47.250 14:05:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:47.250 14:05:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:47.250 14:05:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:47.250 14:05:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:47.250 14:05:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:47.250 14:05:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:47.250 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:47.250 14:05:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:23:47.250 14:05:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:47.250 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:47.250 14:05:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:47.250 14:05:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:47.250 14:05:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.250 14:05:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:47.250 14:05:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.250 14:05:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:47.250 Found net devices under 0000:86:00.0: cvl_0_0 00:23:47.250 14:05:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.250 14:05:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:47.250 14:05:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:47.250 14:05:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:47.250 14:05:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:47.250 14:05:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:47.250 Found net devices under 0000:86:00.1: cvl_0_1 00:23:47.250 14:05:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:47.250 14:05:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:47.250 14:05:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:47.250 14:05:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:47.250 14:05:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:47.250 14:05:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:47.250 14:05:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:47.250 14:05:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:47.250 14:05:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:47.250 14:05:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:47.250 14:05:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:47.250 14:05:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:47.250 14:05:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:47.250 14:05:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:47.250 14:05:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:47.250 14:05:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:47.250 14:05:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:47.250 14:05:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:47.250 14:05:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:47.250 14:05:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:47.250 14:05:37 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:47.250 14:05:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:47.250 14:05:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:47.250 14:05:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:47.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:47.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:23:47.250 00:23:47.250 --- 10.0.0.2 ping statistics --- 00:23:47.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.250 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:23:47.250 14:05:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:47.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:47.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:23:47.250 00:23:47.250 --- 10.0.0.1 ping statistics --- 00:23:47.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:47.250 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:23:47.250 14:05:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:47.250 14:05:37 -- nvmf/common.sh@410 -- # return 0 00:23:47.250 14:05:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:47.250 14:05:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:47.250 14:05:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:47.250 14:05:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:47.250 14:05:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:47.250 14:05:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:47.250 14:05:37 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:47.250 14:05:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:47.250 14:05:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:47.250 14:05:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.250 14:05:37 -- nvmf/common.sh@469 -- # nvmfpid=3350212 00:23:47.250 14:05:37 -- nvmf/common.sh@470 -- # waitforlisten 3350212 00:23:47.250 14:05:37 -- common/autotest_common.sh@819 -- # '[' -z 3350212 ']' 00:23:47.250 14:05:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.250 14:05:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:47.250 14:05:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.250 14:05:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:47.250 14:05:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:47.250 14:05:37 -- common/autotest_common.sh@10 -- # set +x 00:23:47.250 [2024-07-23 14:05:37.925618] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
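Note the shutdown target starts with -m 0x1E where the perf_adq targets used 0xF: the value is a core mask, not a core count. 0x1E is binary 11110, so the four reactors land on cores 1-4 and core 0 stays free, matching the 'Reactor started on core N' lines in the EAL output below. For reference:

printf '0x%X\n' $(( (1 << 1) | (1 << 2) | (1 << 3) | (1 << 4) ))   # prints 0x1E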
00:23:47.250 [2024-07-23 14:05:37.925659] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.250 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.250 [2024-07-23 14:05:37.982563] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:47.250 [2024-07-23 14:05:38.061118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:47.250 [2024-07-23 14:05:38.061224] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.250 [2024-07-23 14:05:38.061232] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.250 [2024-07-23 14:05:38.061238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:47.250 [2024-07-23 14:05:38.061336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.250 [2024-07-23 14:05:38.061355] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.250 [2024-07-23 14:05:38.061467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.250 [2024-07-23 14:05:38.061469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:47.818 14:05:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:47.818 14:05:38 -- common/autotest_common.sh@852 -- # return 0 00:23:47.818 14:05:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:47.818 14:05:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:47.818 14:05:38 -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 14:05:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.818 14:05:38 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:47.818 14:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:47.818 14:05:38 -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 [2024-07-23 14:05:38.782419] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.818 14:05:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:47.818 14:05:38 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:47.818 14:05:38 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:47.818 14:05:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:47.818 14:05:38 -- common/autotest_common.sh@10 -- # set +x 00:23:47.818 14:05:38 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- 
target/shutdown.sh@28 -- # cat 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:47.818 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:47.818 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:48.077 14:05:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:48.077 14:05:38 -- target/shutdown.sh@28 -- # cat 00:23:48.077 14:05:38 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:48.077 14:05:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:48.077 14:05:38 -- common/autotest_common.sh@10 -- # set +x 00:23:48.077 Malloc1 00:23:48.077 [2024-07-23 14:05:38.882203] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.077 Malloc2 00:23:48.077 Malloc3 00:23:48.077 Malloc4 00:23:48.077 Malloc5 00:23:48.077 Malloc6 00:23:48.337 Malloc7 00:23:48.337 Malloc8 00:23:48.337 Malloc9 00:23:48.337 Malloc10 00:23:48.337 14:05:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:48.337 14:05:39 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:48.337 14:05:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:48.337 14:05:39 -- common/autotest_common.sh@10 -- # set +x 00:23:48.337 14:05:39 -- target/shutdown.sh@78 -- # perfpid=3350495 00:23:48.337 14:05:39 -- target/shutdown.sh@79 -- # waitforlisten 3350495 /var/tmp/bdevperf.sock 00:23:48.337 14:05:39 -- common/autotest_common.sh@819 -- # '[' -z 3350495 ']' 00:23:48.337 14:05:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.337 14:05:39 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:48.337 14:05:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:48.337 14:05:39 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:48.337 14:05:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
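For the shutdown tests the initiator side is bdev_svc fed a JSON config over fd 63, and gen_nvmf_target_json, whose trace begins below, emits one bdev_nvme_attach_controller entry per requested subsystem (1 through 10). With the values this run has in scope (transport tcp, first target IP 10.0.0.2, port 4420, hdgst/ddgst defaulting to false per the heredoc), the entry for subsystem 1 expands to roughly:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}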
00:23:48.337 14:05:39 -- nvmf/common.sh@520 -- # config=() 00:23:48.337 14:05:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:48.337 14:05:39 -- common/autotest_common.sh@10 -- # set +x 00:23:48.337 14:05:39 -- nvmf/common.sh@520 -- # local subsystem config 00:23:48.337 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.337 { 00:23:48.337 "params": { 00:23:48.337 "name": "Nvme$subsystem", 00:23:48.337 "trtype": "$TEST_TRANSPORT", 00:23:48.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.337 "adrfam": "ipv4", 00:23:48.337 "trsvcid": "$NVMF_PORT", 00:23:48.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.337 "hdgst": ${hdgst:-false}, 00:23:48.337 "ddgst": ${ddgst:-false} 00:23:48.337 }, 00:23:48.337 "method": "bdev_nvme_attach_controller" 00:23:48.337 } 00:23:48.337 EOF 00:23:48.337 )") 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.337 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.337 { 00:23:48.337 "params": { 00:23:48.337 "name": "Nvme$subsystem", 00:23:48.337 "trtype": "$TEST_TRANSPORT", 00:23:48.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.337 "adrfam": "ipv4", 00:23:48.337 "trsvcid": "$NVMF_PORT", 00:23:48.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.337 "hdgst": ${hdgst:-false}, 00:23:48.337 "ddgst": ${ddgst:-false} 00:23:48.337 }, 00:23:48.337 "method": "bdev_nvme_attach_controller" 00:23:48.337 } 00:23:48.337 EOF 00:23:48.337 )") 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.337 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.337 { 00:23:48.337 "params": { 00:23:48.337 "name": "Nvme$subsystem", 00:23:48.337 "trtype": "$TEST_TRANSPORT", 00:23:48.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.337 "adrfam": "ipv4", 00:23:48.337 "trsvcid": "$NVMF_PORT", 00:23:48.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.337 "hdgst": ${hdgst:-false}, 00:23:48.337 "ddgst": ${ddgst:-false} 00:23:48.337 }, 00:23:48.337 "method": "bdev_nvme_attach_controller" 00:23:48.337 } 00:23:48.337 EOF 00:23:48.337 )") 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.337 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.337 { 00:23:48.337 "params": { 00:23:48.337 "name": "Nvme$subsystem", 00:23:48.337 "trtype": "$TEST_TRANSPORT", 00:23:48.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.337 "adrfam": "ipv4", 00:23:48.337 "trsvcid": "$NVMF_PORT", 00:23:48.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.337 "hdgst": ${hdgst:-false}, 00:23:48.337 "ddgst": ${ddgst:-false} 00:23:48.337 }, 00:23:48.337 "method": "bdev_nvme_attach_controller" 00:23:48.337 } 00:23:48.337 EOF 00:23:48.337 )") 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.337 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.337 { 00:23:48.337 "params": { 00:23:48.337 "name": "Nvme$subsystem", 00:23:48.337 "trtype": 
"$TEST_TRANSPORT", 00:23:48.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.337 "adrfam": "ipv4", 00:23:48.337 "trsvcid": "$NVMF_PORT", 00:23:48.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.337 "hdgst": ${hdgst:-false}, 00:23:48.337 "ddgst": ${ddgst:-false} 00:23:48.337 }, 00:23:48.337 "method": "bdev_nvme_attach_controller" 00:23:48.337 } 00:23:48.337 EOF 00:23:48.337 )") 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.337 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.337 { 00:23:48.337 "params": { 00:23:48.337 "name": "Nvme$subsystem", 00:23:48.337 "trtype": "$TEST_TRANSPORT", 00:23:48.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.337 "adrfam": "ipv4", 00:23:48.337 "trsvcid": "$NVMF_PORT", 00:23:48.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.337 "hdgst": ${hdgst:-false}, 00:23:48.337 "ddgst": ${ddgst:-false} 00:23:48.337 }, 00:23:48.337 "method": "bdev_nvme_attach_controller" 00:23:48.337 } 00:23:48.337 EOF 00:23:48.337 )") 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.337 [2024-07-23 14:05:39.346541] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:48.337 [2024-07-23 14:05:39.346590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:48.337 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.337 { 00:23:48.337 "params": { 00:23:48.337 "name": "Nvme$subsystem", 00:23:48.337 "trtype": "$TEST_TRANSPORT", 00:23:48.337 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.337 "adrfam": "ipv4", 00:23:48.337 "trsvcid": "$NVMF_PORT", 00:23:48.337 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.337 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.337 "hdgst": ${hdgst:-false}, 00:23:48.337 "ddgst": ${ddgst:-false} 00:23:48.337 }, 00:23:48.337 "method": "bdev_nvme_attach_controller" 00:23:48.337 } 00:23:48.337 EOF 00:23:48.337 )") 00:23:48.337 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.597 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.597 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.597 { 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme$subsystem", 00:23:48.597 "trtype": "$TEST_TRANSPORT", 00:23:48.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "$NVMF_PORT", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.597 "hdgst": ${hdgst:-false}, 00:23:48.597 "ddgst": ${ddgst:-false} 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 } 00:23:48.597 EOF 00:23:48.597 )") 00:23:48.597 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.597 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.597 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.597 { 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme$subsystem", 00:23:48.597 "trtype": "$TEST_TRANSPORT", 00:23:48.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "$NVMF_PORT", 
00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.597 "hdgst": ${hdgst:-false}, 00:23:48.597 "ddgst": ${ddgst:-false} 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 } 00:23:48.597 EOF 00:23:48.597 )") 00:23:48.597 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.597 14:05:39 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:48.597 14:05:39 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:48.597 { 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme$subsystem", 00:23:48.597 "trtype": "$TEST_TRANSPORT", 00:23:48.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "$NVMF_PORT", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.597 "hdgst": ${hdgst:-false}, 00:23:48.597 "ddgst": ${ddgst:-false} 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 } 00:23:48.597 EOF 00:23:48.597 )") 00:23:48.597 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.597 14:05:39 -- nvmf/common.sh@542 -- # cat 00:23:48.597 14:05:39 -- nvmf/common.sh@544 -- # jq . 00:23:48.597 14:05:39 -- nvmf/common.sh@545 -- # IFS=, 00:23:48.597 14:05:39 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme1", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme2", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme3", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme4", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme5", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 
00:23:48.597 "name": "Nvme6", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme7", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme8", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme9", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 },{ 00:23:48.597 "params": { 00:23:48.597 "name": "Nvme10", 00:23:48.597 "trtype": "tcp", 00:23:48.597 "traddr": "10.0.0.2", 00:23:48.597 "adrfam": "ipv4", 00:23:48.597 "trsvcid": "4420", 00:23:48.597 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:48.597 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:48.597 "hdgst": false, 00:23:48.597 "ddgst": false 00:23:48.597 }, 00:23:48.597 "method": "bdev_nvme_attach_controller" 00:23:48.597 }' 00:23:48.597 [2024-07-23 14:05:39.403324] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.597 [2024-07-23 14:05:39.474596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.500 14:05:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:50.500 14:05:41 -- common/autotest_common.sh@852 -- # return 0 00:23:50.500 14:05:41 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:50.500 14:05:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:50.500 14:05:41 -- common/autotest_common.sh@10 -- # set +x 00:23:50.500 14:05:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:50.500 14:05:41 -- target/shutdown.sh@83 -- # kill -9 3350495 00:23:50.500 14:05:41 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:50.500 14:05:41 -- target/shutdown.sh@87 -- # sleep 1 00:23:51.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3350495 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:51.881 14:05:42 -- target/shutdown.sh@88 -- # kill -0 3350212 00:23:51.881 14:05:42 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:51.881 14:05:42 -- 
target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:51.881 14:05:42 -- nvmf/common.sh@520 -- # config=() 00:23:51.881 14:05:42 -- nvmf/common.sh@520 -- # local subsystem config 00:23:51.881 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.881 { 00:23:51.881 "params": { 00:23:51.881 "name": "Nvme$subsystem", 00:23:51.881 "trtype": "$TEST_TRANSPORT", 00:23:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.881 "adrfam": "ipv4", 00:23:51.881 "trsvcid": "$NVMF_PORT", 00:23:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.881 "hdgst": ${hdgst:-false}, 00:23:51.881 "ddgst": ${ddgst:-false} 00:23:51.881 }, 00:23:51.881 "method": "bdev_nvme_attach_controller" 00:23:51.881 } 00:23:51.881 EOF 00:23:51.881 )") 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.881 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.881 { 00:23:51.881 "params": { 00:23:51.881 "name": "Nvme$subsystem", 00:23:51.881 "trtype": "$TEST_TRANSPORT", 00:23:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.881 "adrfam": "ipv4", 00:23:51.881 "trsvcid": "$NVMF_PORT", 00:23:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.881 "hdgst": ${hdgst:-false}, 00:23:51.881 "ddgst": ${ddgst:-false} 00:23:51.881 }, 00:23:51.881 "method": "bdev_nvme_attach_controller" 00:23:51.881 } 00:23:51.881 EOF 00:23:51.881 )") 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.881 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.881 { 00:23:51.881 "params": { 00:23:51.881 "name": "Nvme$subsystem", 00:23:51.881 "trtype": "$TEST_TRANSPORT", 00:23:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.881 "adrfam": "ipv4", 00:23:51.881 "trsvcid": "$NVMF_PORT", 00:23:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.881 "hdgst": ${hdgst:-false}, 00:23:51.881 "ddgst": ${ddgst:-false} 00:23:51.881 }, 00:23:51.881 "method": "bdev_nvme_attach_controller" 00:23:51.881 } 00:23:51.881 EOF 00:23:51.881 )") 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.881 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.881 { 00:23:51.881 "params": { 00:23:51.881 "name": "Nvme$subsystem", 00:23:51.881 "trtype": "$TEST_TRANSPORT", 00:23:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.881 "adrfam": "ipv4", 00:23:51.881 "trsvcid": "$NVMF_PORT", 00:23:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.881 "hdgst": ${hdgst:-false}, 00:23:51.881 "ddgst": ${ddgst:-false} 00:23:51.881 }, 00:23:51.881 "method": "bdev_nvme_attach_controller" 00:23:51.881 } 00:23:51.881 EOF 00:23:51.881 )") 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.881 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.881 { 00:23:51.881 "params": { 00:23:51.881 "name": "Nvme$subsystem", 00:23:51.881 "trtype": "$TEST_TRANSPORT", 00:23:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:23:51.881 "adrfam": "ipv4", 00:23:51.881 "trsvcid": "$NVMF_PORT", 00:23:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.881 "hdgst": ${hdgst:-false}, 00:23:51.881 "ddgst": ${ddgst:-false} 00:23:51.881 }, 00:23:51.881 "method": "bdev_nvme_attach_controller" 00:23:51.881 } 00:23:51.881 EOF 00:23:51.881 )") 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.881 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.881 { 00:23:51.881 "params": { 00:23:51.881 "name": "Nvme$subsystem", 00:23:51.881 "trtype": "$TEST_TRANSPORT", 00:23:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.881 "adrfam": "ipv4", 00:23:51.881 "trsvcid": "$NVMF_PORT", 00:23:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.881 "hdgst": ${hdgst:-false}, 00:23:51.881 "ddgst": ${ddgst:-false} 00:23:51.881 }, 00:23:51.881 "method": "bdev_nvme_attach_controller" 00:23:51.881 } 00:23:51.881 EOF 00:23:51.881 )") 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.881 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.881 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.881 { 00:23:51.881 "params": { 00:23:51.881 "name": "Nvme$subsystem", 00:23:51.881 "trtype": "$TEST_TRANSPORT", 00:23:51.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.881 "adrfam": "ipv4", 00:23:51.881 "trsvcid": "$NVMF_PORT", 00:23:51.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.881 "hdgst": ${hdgst:-false}, 00:23:51.881 "ddgst": ${ddgst:-false} 00:23:51.881 }, 00:23:51.881 "method": "bdev_nvme_attach_controller" 00:23:51.881 } 00:23:51.881 EOF 00:23:51.882 )") 00:23:51.882 [2024-07-23 14:05:42.557811] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:51.882 [2024-07-23 14:05:42.557856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350996 ] 00:23:51.882 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.882 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.882 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.882 { 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme$subsystem", 00:23:51.882 "trtype": "$TEST_TRANSPORT", 00:23:51.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "$NVMF_PORT", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.882 "hdgst": ${hdgst:-false}, 00:23:51.882 "ddgst": ${ddgst:-false} 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 } 00:23:51.882 EOF 00:23:51.882 )") 00:23:51.882 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.882 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.882 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.882 { 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme$subsystem", 00:23:51.882 "trtype": "$TEST_TRANSPORT", 00:23:51.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "$NVMF_PORT", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.882 "hdgst": ${hdgst:-false}, 00:23:51.882 "ddgst": ${ddgst:-false} 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 } 00:23:51.882 EOF 00:23:51.882 )") 00:23:51.882 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.882 14:05:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:51.882 14:05:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:51.882 { 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme$subsystem", 00:23:51.882 "trtype": "$TEST_TRANSPORT", 00:23:51.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "$NVMF_PORT", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.882 "hdgst": ${hdgst:-false}, 00:23:51.882 "ddgst": ${ddgst:-false} 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 } 00:23:51.882 EOF 00:23:51.882 )") 00:23:51.882 14:05:42 -- nvmf/common.sh@542 -- # cat 00:23:51.882 14:05:42 -- nvmf/common.sh@544 -- # jq . 
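The jq ., IFS=, and printf trio traced here is the assembly step: with IFS set to a comma, "${config[*]}" expands to the fragments joined by commas, giving the consumer one JSON document to read. One plausible shape of that step; the wrapping "subsystems"/"bdev" object is an assumption about the helper's output, not verbatim from this trace:

finalize_target_json() {   # hypothetical helper name
    local IFS=","
    # join the fragments and pipe through jq, which both pretty-prints the result
    # and fails loudly if any heredoc above produced malformed JSON
    printf '{"subsystems":[{"subsystem":"bdev","config":[%s]}]}\n' "${config[*]}" | jq .
}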
00:23:51.882 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.882 14:05:42 -- nvmf/common.sh@545 -- # IFS=, 00:23:51.882 14:05:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme1", 00:23:51.882 "trtype": "tcp", 00:23:51.882 "traddr": "10.0.0.2", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "4420", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.882 "hdgst": false, 00:23:51.882 "ddgst": false 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 },{ 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme2", 00:23:51.882 "trtype": "tcp", 00:23:51.882 "traddr": "10.0.0.2", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "4420", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:51.882 "hdgst": false, 00:23:51.882 "ddgst": false 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 },{ 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme3", 00:23:51.882 "trtype": "tcp", 00:23:51.882 "traddr": "10.0.0.2", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "4420", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:51.882 "hdgst": false, 00:23:51.882 "ddgst": false 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 },{ 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme4", 00:23:51.882 "trtype": "tcp", 00:23:51.882 "traddr": "10.0.0.2", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "4420", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:51.882 "hdgst": false, 00:23:51.882 "ddgst": false 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 },{ 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme5", 00:23:51.882 "trtype": "tcp", 00:23:51.882 "traddr": "10.0.0.2", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "4420", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:51.882 "hdgst": false, 00:23:51.882 "ddgst": false 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 },{ 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme6", 00:23:51.882 "trtype": "tcp", 00:23:51.882 "traddr": "10.0.0.2", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "4420", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:51.882 "hdgst": false, 00:23:51.882 "ddgst": false 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 },{ 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme7", 00:23:51.882 "trtype": "tcp", 00:23:51.882 "traddr": "10.0.0.2", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "4420", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:51.882 "hdgst": false, 00:23:51.882 "ddgst": false 00:23:51.882 }, 00:23:51.882 "method": "bdev_nvme_attach_controller" 00:23:51.882 },{ 00:23:51.882 "params": { 00:23:51.882 "name": "Nvme8", 00:23:51.882 "trtype": "tcp", 00:23:51.882 "traddr": "10.0.0.2", 00:23:51.882 "adrfam": "ipv4", 00:23:51.882 "trsvcid": "4420", 00:23:51.882 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:51.882 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:51.882 "hdgst": false, 00:23:51.882 "ddgst": false 
00:23:51.882 },
00:23:51.882 "method": "bdev_nvme_attach_controller"
00:23:51.882 },{
00:23:51.882 "params": {
00:23:51.882 "name": "Nvme9",
00:23:51.883 "trtype": "tcp",
00:23:51.883 "traddr": "10.0.0.2",
00:23:51.883 "adrfam": "ipv4",
00:23:51.883 "trsvcid": "4420",
00:23:51.883 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:23:51.883 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:23:51.883 "hdgst": false,
00:23:51.883 "ddgst": false
00:23:51.883 },
00:23:51.883 "method": "bdev_nvme_attach_controller"
00:23:51.883 },{
00:23:51.883 "params": {
00:23:51.883 "name": "Nvme10",
00:23:51.883 "trtype": "tcp",
00:23:51.883 "traddr": "10.0.0.2",
00:23:51.883 "adrfam": "ipv4",
00:23:51.883 "trsvcid": "4420",
00:23:51.883 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:23:51.883 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:23:51.883 "hdgst": false,
00:23:51.883 "ddgst": false
00:23:51.883 },
00:23:51.883 "method": "bdev_nvme_attach_controller"
00:23:51.883 }'
00:23:51.883 [2024-07-23 14:05:42.615609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:51.883 [2024-07-23 14:05:42.687539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:23:53.260 Running I/O for 1 seconds...
00:23:54.639
00:23:54.639 Latency(us)
00:23:54.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:54.639 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme1n1 : 1.11 471.75 29.48 0.00 0.00 128990.58 10599.74 111696.14
00:23:54.639 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme2n1 : 1.06 492.12 30.76 0.00 0.00 126305.14 24390.79 108048.92
00:23:54.639 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme3n1 : 1.08 448.15 28.01 0.00 0.00 138827.65 17666.23 128564.54
00:23:54.639 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme4n1 : 1.08 487.53 30.47 0.00 0.00 126927.63 8719.14 96651.35
00:23:54.639 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme5n1 : 1.08 491.41 30.71 0.00 0.00 125399.14 9801.91 122181.90
00:23:54.639 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme6n1 : 1.07 406.53 25.41 0.00 0.00 150126.29 12765.27 122181.90
00:23:54.639 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme7n1 : 1.09 484.12 30.26 0.00 0.00 125778.96 12252.38 104857.60
00:23:54.639 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme8n1 : 1.09 443.89 27.74 0.00 0.00 136040.09 16640.45 113519.75
00:23:54.639 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme9n1 : 1.09 480.66 30.04 0.00 0.00 125496.65 4245.59 105313.50
00:23:54.639 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:54.639 Verification LBA range: start 0x0 length 0x400
00:23:54.639 Nvme10n1 : 1.13 460.86 28.80 0.00 0.00 125564.65 8605.16 115343.36
00:23:54.639 ===================================================================================================================
00:23:54.639 Total : 4667.01 291.69 0.00 0.00 130492.43 4245.59 128564.54
00:23:54.639 14:05:45 -- target/shutdown.sh@93 -- # stoptarget
00:23:54.639 14:05:45 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:23:54.639 14:05:45 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:54.639 14:05:45 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:54.639 14:05:45 -- target/shutdown.sh@45 -- # nvmftestfini
00:23:54.639 14:05:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:23:54.639 14:05:45 -- nvmf/common.sh@116 -- # sync
00:23:54.639 14:05:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:23:54.639 14:05:45 -- nvmf/common.sh@119 -- # set +e
00:23:54.639 14:05:45 -- nvmf/common.sh@120 -- # for i in {1..20}
00:23:54.639 14:05:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:23:54.639 rmmod nvme_tcp
00:23:54.639 rmmod nvme_fabrics
00:23:54.639 rmmod nvme_keyring
00:23:54.639 14:05:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:23:54.639 14:05:45 -- nvmf/common.sh@123 -- # set -e
00:23:54.639 14:05:45 -- nvmf/common.sh@124 -- # return 0
00:23:54.639 14:05:45 -- nvmf/common.sh@477 -- # '[' -n 3350212 ']'
00:23:54.639 14:05:45 -- nvmf/common.sh@478 -- # killprocess 3350212
00:23:54.639 14:05:45 -- common/autotest_common.sh@926 -- # '[' -z 3350212 ']'
00:23:54.639 14:05:45 -- common/autotest_common.sh@930 -- # kill -0 3350212
00:23:54.639 14:05:45 -- common/autotest_common.sh@931 -- # uname
00:23:54.639 14:05:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:23:54.639 14:05:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3350212
00:23:54.639 14:05:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:23:54.639 14:05:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:23:54.639 14:05:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3350212'
00:23:54.639 killing process with pid 3350212
00:23:54.639 14:05:45 -- common/autotest_common.sh@945 -- # kill 3350212
00:23:54.639 14:05:45 -- common/autotest_common.sh@950 -- # wait 3350212
00:23:55.208 14:05:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:23:55.208 14:05:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:23:55.208 14:05:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:23:55.208 14:05:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:55.208 14:05:46 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:23:55.208 14:05:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:55.208 14:05:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:55.208 14:05:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:57.117 14:05:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:23:57.117
00:23:57.117 real 0m15.372s
00:23:57.117 user 0m38.064s
00:23:57.117 sys 0m5.329s
00:23:57.117 14:05:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:57.117 14:05:48 -- common/autotest_common.sh@10 -- # set +x
00:23:57.117 ************************************
00:23:57.117 END TEST nvmf_shutdown_tc1 ************************************
00:23:57.377 14:05:48 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2
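The real/user/sys figures and the END TEST banner above are emitted by autotest's run_test wrapper, which the line above invokes again for nvmf_shutdown_tc2. A minimal sketch of that wrapper pattern, assuming a simplified form rather than the actual common/autotest_common.sh implementation:

run_test_sketch() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    # the bash time keyword prints the real/user/sys summary seen above (on stderr)
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

Invoked as run_test_sketch nvmf_shutdown_tc2 nvmf_shutdown_tc2, this matches the call shape traced above.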
00:23:57.377 14:05:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:23:57.377 14:05:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:57.377 14:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:57.377 ************************************ 00:23:57.377 START TEST nvmf_shutdown_tc2 00:23:57.377 ************************************ 00:23:57.377 14:05:48 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:23:57.377 14:05:48 -- target/shutdown.sh@98 -- # starttarget 00:23:57.377 14:05:48 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:57.377 14:05:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:57.377 14:05:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.378 14:05:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:57.378 14:05:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:57.378 14:05:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:57.378 14:05:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.378 14:05:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.378 14:05:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.378 14:05:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:57.378 14:05:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:57.378 14:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:57.378 14:05:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:57.378 14:05:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:57.378 14:05:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:57.378 14:05:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:57.378 14:05:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:57.378 14:05:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:57.378 14:05:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:57.378 14:05:48 -- nvmf/common.sh@294 -- # net_devs=() 00:23:57.378 14:05:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:57.378 14:05:48 -- nvmf/common.sh@295 -- # e810=() 00:23:57.378 14:05:48 -- nvmf/common.sh@295 -- # local -ga e810 00:23:57.378 14:05:48 -- nvmf/common.sh@296 -- # x722=() 00:23:57.378 14:05:48 -- nvmf/common.sh@296 -- # local -ga x722 00:23:57.378 14:05:48 -- nvmf/common.sh@297 -- # mlx=() 00:23:57.378 14:05:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:57.378 14:05:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.378 14:05:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:57.378 14:05:48 -- 
nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:57.378 14:05:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:57.378 14:05:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.378 14:05:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:57.378 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:57.378 14:05:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:57.378 14:05:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:57.378 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:57.378 14:05:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:57.378 14:05:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.378 14:05:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.378 14:05:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.378 14:05:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.378 14:05:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:57.378 Found net devices under 0000:86:00.0: cvl_0_0 00:23:57.378 14:05:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.378 14:05:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:57.378 14:05:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.378 14:05:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:57.378 14:05:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.378 14:05:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:57.378 Found net devices under 0000:86:00.1: cvl_0_1 00:23:57.378 14:05:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.378 14:05:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:57.378 14:05:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:57.378 14:05:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:57.378 14:05:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:57.378 14:05:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.378 14:05:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.378 14:05:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:57.378 14:05:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:57.378 14:05:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:57.378 14:05:48 -- nvmf/common.sh@236 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:57.378 14:05:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:57.378 14:05:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:57.378 14:05:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.378 14:05:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:57.378 14:05:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:57.378 14:05:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:57.378 14:05:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:57.378 14:05:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:57.378 14:05:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:57.378 14:05:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:57.378 14:05:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:57.639 14:05:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:57.639 14:05:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:57.639 14:05:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:57.639 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.639 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:57.639 00:23:57.639 --- 10.0.0.2 ping statistics --- 00:23:57.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.639 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:57.639 14:05:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:57.639 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:57.639 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:23:57.639 00:23:57.639 --- 10.0.0.1 ping statistics --- 00:23:57.639 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.639 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:23:57.639 14:05:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.639 14:05:48 -- nvmf/common.sh@410 -- # return 0 00:23:57.639 14:05:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:57.639 14:05:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.639 14:05:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:57.639 14:05:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:57.639 14:05:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.639 14:05:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:57.639 14:05:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:57.639 14:05:48 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:57.639 14:05:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:57.639 14:05:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:57.639 14:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:57.639 14:05:48 -- nvmf/common.sh@469 -- # nvmfpid=3352042 00:23:57.639 14:05:48 -- nvmf/common.sh@470 -- # waitforlisten 3352042 00:23:57.639 14:05:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:57.639 14:05:48 -- common/autotest_common.sh@819 -- # '[' -z 3352042 ']' 00:23:57.639 14:05:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.639 14:05:48 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:23:57.639 14:05:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.639 14:05:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:57.639 14:05:48 -- common/autotest_common.sh@10 -- # set +x 00:23:57.639 [2024-07-23 14:05:48.538625] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:23:57.639 [2024-07-23 14:05:48.538669] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.639 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.639 [2024-07-23 14:05:48.598927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.899 [2024-07-23 14:05:48.673177] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:57.899 [2024-07-23 14:05:48.673288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.899 [2024-07-23 14:05:48.673295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.899 [2024-07-23 14:05:48.673301] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:57.899 [2024-07-23 14:05:48.673400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.899 [2024-07-23 14:05:48.673483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.899 [2024-07-23 14:05:48.673593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.899 [2024-07-23 14:05:48.673594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:58.468 14:05:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:58.468 14:05:49 -- common/autotest_common.sh@852 -- # return 0 00:23:58.468 14:05:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:58.468 14:05:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:58.468 14:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.468 14:05:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.468 14:05:49 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:58.468 14:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.468 14:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.468 [2024-07-23 14:05:49.383286] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.468 14:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.468 14:05:49 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:58.468 14:05:49 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:58.468 14:05:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:58.468 14:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.469 14:05:49 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 
00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:58.469 14:05:49 -- target/shutdown.sh@28 -- # cat 00:23:58.469 14:05:49 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:58.469 14:05:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:58.469 14:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.469 Malloc1 00:23:58.469 [2024-07-23 14:05:49.478954] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.728 Malloc2 00:23:58.728 Malloc3 00:23:58.728 Malloc4 00:23:58.728 Malloc5 00:23:58.728 Malloc6 00:23:58.728 Malloc7 00:23:58.989 Malloc8 00:23:58.989 Malloc9 00:23:58.989 Malloc10 00:23:58.989 14:05:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:58.989 14:05:49 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:58.989 14:05:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:58.989 14:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.989 14:05:49 -- target/shutdown.sh@102 -- # perfpid=3352323 00:23:58.989 14:05:49 -- target/shutdown.sh@103 -- # waitforlisten 3352323 /var/tmp/bdevperf.sock 00:23:58.989 14:05:49 -- common/autotest_common.sh@819 -- # '[' -z 3352323 ']' 00:23:58.989 14:05:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:58.989 14:05:49 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:58.989 14:05:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:58.989 14:05:49 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:58.989 14:05:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:58.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
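As in the tc1 run, the bdevperf invocation traced above receives its controller list through process substitution, so the generated JSON never touches disk; the --json /dev/fd/63 in the trace is the substituted pipe, matching the <(gen_nvmf_target_json ...) form quoted from shutdown.sh line 73 earlier in this log. The launch pattern with the traced flags:

# queue depth 64, 64 KiB I/Os, verify workload, 10 s run, RPC socket for waitforlisten
"$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 10 &
# record the background pid (logged above as perfpid=3352323), then block
# until the bdevperf RPC socket accepts connections
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock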
00:23:58.989 14:05:49 -- nvmf/common.sh@520 -- # config=() 00:23:58.989 14:05:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:58.989 14:05:49 -- nvmf/common.sh@520 -- # local subsystem config 00:23:58.989 14:05:49 -- common/autotest_common.sh@10 -- # set +x 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": 
"$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 [2024-07-23 14:05:49.950774] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:23:58.989 [2024-07-23 14:05:49.950820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3352323 ] 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.989 "ddgst": ${ddgst:-false} 00:23:58.989 }, 00:23:58.989 "method": "bdev_nvme_attach_controller" 00:23:58.989 } 00:23:58.989 EOF 00:23:58.989 )") 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.989 14:05:49 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:58.989 14:05:49 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:58.989 { 00:23:58.989 "params": { 00:23:58.989 "name": "Nvme$subsystem", 00:23:58.989 "trtype": "$TEST_TRANSPORT", 00:23:58.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:58.989 "adrfam": "ipv4", 00:23:58.989 "trsvcid": "$NVMF_PORT", 00:23:58.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:58.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:58.989 "hdgst": ${hdgst:-false}, 00:23:58.990 "ddgst": ${ddgst:-false} 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 } 00:23:58.990 EOF 00:23:58.990 )") 00:23:58.990 14:05:49 -- nvmf/common.sh@542 -- # cat 00:23:58.990 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.990 14:05:49 -- nvmf/common.sh@544 -- # jq . 
00:23:58.990 14:05:49 -- nvmf/common.sh@545 -- # IFS=, 00:23:58.990 14:05:49 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme1", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme2", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme3", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme4", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme5", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme6", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme7", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme8", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": 
"bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme9", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 },{ 00:23:58.990 "params": { 00:23:58.990 "name": "Nvme10", 00:23:58.990 "trtype": "tcp", 00:23:58.990 "traddr": "10.0.0.2", 00:23:58.990 "adrfam": "ipv4", 00:23:58.990 "trsvcid": "4420", 00:23:58.990 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:58.990 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:58.990 "hdgst": false, 00:23:58.990 "ddgst": false 00:23:58.990 }, 00:23:58.990 "method": "bdev_nvme_attach_controller" 00:23:58.990 }' 00:23:59.250 [2024-07-23 14:05:50.006410] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.250 [2024-07-23 14:05:50.083489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.629 Running I/O for 10 seconds... 00:24:01.197 14:05:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:01.197 14:05:52 -- common/autotest_common.sh@852 -- # return 0 00:24:01.198 14:05:52 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:01.198 14:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:01.198 14:05:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.198 14:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:01.198 14:05:52 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:01.198 14:05:52 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:01.198 14:05:52 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:01.198 14:05:52 -- target/shutdown.sh@57 -- # local ret=1 00:24:01.198 14:05:52 -- target/shutdown.sh@58 -- # local i 00:24:01.198 14:05:52 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:01.198 14:05:52 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:01.198 14:05:52 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:01.198 14:05:52 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:01.198 14:05:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:01.198 14:05:52 -- common/autotest_common.sh@10 -- # set +x 00:24:01.198 14:05:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:01.198 14:05:52 -- target/shutdown.sh@60 -- # read_io_count=211 00:24:01.198 14:05:52 -- target/shutdown.sh@63 -- # '[' 211 -ge 100 ']' 00:24:01.198 14:05:52 -- target/shutdown.sh@64 -- # ret=0 00:24:01.198 14:05:52 -- target/shutdown.sh@65 -- # break 00:24:01.198 14:05:52 -- target/shutdown.sh@69 -- # return 0 00:24:01.198 14:05:52 -- target/shutdown.sh@109 -- # killprocess 3352323 00:24:01.198 14:05:52 -- common/autotest_common.sh@926 -- # '[' -z 3352323 ']' 00:24:01.198 14:05:52 -- common/autotest_common.sh@930 -- # kill -0 3352323 00:24:01.198 14:05:52 -- common/autotest_common.sh@931 -- # uname 00:24:01.198 14:05:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:01.198 14:05:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3352323 00:24:01.458 14:05:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:01.458 14:05:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:01.458 14:05:52 -- common/autotest_common.sh@944 
-- # echo 'killing process with pid 3352323'
00:24:01.458 killing process with pid 3352323
00:24:01.458 14:05:52 -- common/autotest_common.sh@945 -- # kill 3352323
00:24:01.458 14:05:52 -- common/autotest_common.sh@950 -- # wait 3352323
00:24:01.458 Received shutdown signal, test time was about 0.676561 seconds
00:24:01.458
00:24:01.458                                                                 Latency(us)
00:24:01.458 Device Information                                            : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average       min        max
00:24:01.458 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme1n1   : 0.66   477.79  29.86   0.00   0.00  130899.32  14474.91  126740.93
00:24:01.458 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme2n1   : 0.66   540.90  33.81   0.00   0.00  113913.16  10200.82  102578.09
00:24:01.458 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme3n1   : 0.67   473.11  29.57   0.00   0.00  129999.36  12708.29  117622.87
00:24:01.458 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme4n1   : 0.66   538.71  33.67   0.00   0.00  111916.50   6781.55  101210.38
00:24:01.458 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme5n1   : 0.68   402.45  25.15   0.00   0.00  139684.49  19831.76  110784.33
00:24:01.458 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme6n1   : 0.65   415.63  25.98   0.00   0.00  142386.07  14531.90  113975.65
00:24:01.458 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme7n1   : 0.64   426.41  26.65   0.00   0.00  136811.64  16070.57  109872.53
00:24:01.458 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme8n1   : 0.65   485.83  30.36   0.00   0.00  119188.86  17780.20  103033.99
00:24:01.458 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme9n1   : 0.65   488.03  30.50   0.00   0.00  117326.17  16184.54  116255.17
00:24:01.458 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:01.458 Verification LBA range: start 0x0 length 0x400
00:24:01.458 Nvme10n1  : 0.65   367.44  22.96   0.00   0.00  151563.16   7522.39  135858.98
00:24:01.458 ===================================================================================================================
00:24:01.458 Total     : 0.00  4616.29 288.52   0.00   0.00  127947.77   6781.55  135858.98
00:24:01.718 14:05:52 -- target/shutdown.sh@112 -- # sleep 1
00:24:02.656 14:05:53 -- target/shutdown.sh@113 -- # kill -0 3352042
00:24:02.656 14:05:53 -- target/shutdown.sh@115 -- # stoptarget
00:24:02.656 14:05:53 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:24:02.656 14:05:53 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:24:02.656 14:05:53 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:24:02.656 14:05:53 -- target/shutdown.sh@45 -- # nvmftestfini
00:24:02.656 14:05:53 --
nvmf/common.sh@476 -- # nvmfcleanup 00:24:02.656 14:05:53 -- nvmf/common.sh@116 -- # sync 00:24:02.656 14:05:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:02.656 14:05:53 -- nvmf/common.sh@119 -- # set +e 00:24:02.657 14:05:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:02.657 14:05:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:02.657 rmmod nvme_tcp 00:24:02.657 rmmod nvme_fabrics 00:24:02.657 rmmod nvme_keyring 00:24:02.657 14:05:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:02.657 14:05:53 -- nvmf/common.sh@123 -- # set -e 00:24:02.657 14:05:53 -- nvmf/common.sh@124 -- # return 0 00:24:02.657 14:05:53 -- nvmf/common.sh@477 -- # '[' -n 3352042 ']' 00:24:02.657 14:05:53 -- nvmf/common.sh@478 -- # killprocess 3352042 00:24:02.657 14:05:53 -- common/autotest_common.sh@926 -- # '[' -z 3352042 ']' 00:24:02.657 14:05:53 -- common/autotest_common.sh@930 -- # kill -0 3352042 00:24:02.657 14:05:53 -- common/autotest_common.sh@931 -- # uname 00:24:02.657 14:05:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:02.657 14:05:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3352042 00:24:02.657 14:05:53 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:02.657 14:05:53 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:02.657 14:05:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3352042' 00:24:02.657 killing process with pid 3352042 00:24:02.657 14:05:53 -- common/autotest_common.sh@945 -- # kill 3352042 00:24:02.657 14:05:53 -- common/autotest_common.sh@950 -- # wait 3352042 00:24:03.270 14:05:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:03.270 14:05:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:03.270 14:05:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:03.270 14:05:54 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.270 14:05:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:03.270 14:05:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.270 14:05:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.270 14:05:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.181 14:05:56 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:05.181 00:24:05.181 real 0m7.970s 00:24:05.181 user 0m24.142s 00:24:05.181 sys 0m1.366s 00:24:05.181 14:05:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:05.181 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.181 ************************************ 00:24:05.181 END TEST nvmf_shutdown_tc2 00:24:05.181 ************************************ 00:24:05.181 14:05:56 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:05.181 14:05:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:05.181 14:05:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:05.181 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.181 ************************************ 00:24:05.181 START TEST nvmf_shutdown_tc3 00:24:05.181 ************************************ 00:24:05.181 14:05:56 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:24:05.181 14:05:56 -- target/shutdown.sh@120 -- # starttarget 00:24:05.181 14:05:56 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:05.181 14:05:56 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:05.181 14:05:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.181 14:05:56 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:24:05.181 14:05:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:05.181 14:05:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:05.181 14:05:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.181 14:05:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.181 14:05:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.181 14:05:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:05.181 14:05:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:05.181 14:05:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:05.181 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.181 14:05:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:05.181 14:05:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:05.181 14:05:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:05.181 14:05:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:05.181 14:05:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:05.181 14:05:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:05.181 14:05:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:05.181 14:05:56 -- nvmf/common.sh@294 -- # net_devs=() 00:24:05.181 14:05:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:05.181 14:05:56 -- nvmf/common.sh@295 -- # e810=() 00:24:05.181 14:05:56 -- nvmf/common.sh@295 -- # local -ga e810 00:24:05.181 14:05:56 -- nvmf/common.sh@296 -- # x722=() 00:24:05.181 14:05:56 -- nvmf/common.sh@296 -- # local -ga x722 00:24:05.181 14:05:56 -- nvmf/common.sh@297 -- # mlx=() 00:24:05.181 14:05:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:05.181 14:05:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.181 14:05:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.181 14:05:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.181 14:05:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.442 14:05:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.442 14:05:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.442 14:05:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.442 14:05:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.442 14:05:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.442 14:05:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.442 14:05:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.442 14:05:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:05.442 14:05:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:05.442 14:05:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:05.442 14:05:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:05.442 14:05:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:05.442 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:05.442 14:05:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:05.442 14:05:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:05.442 14:05:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:05.442 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:05.442 14:05:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:05.442 14:05:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:05.442 14:05:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.442 14:05:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:05.442 14:05:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.442 14:05:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:05.442 Found net devices under 0000:86:00.0: cvl_0_0 00:24:05.442 14:05:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.442 14:05:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:05.442 14:05:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.442 14:05:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:05.442 14:05:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.442 14:05:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:05.442 Found net devices under 0000:86:00.1: cvl_0_1 00:24:05.442 14:05:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.442 14:05:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:05.442 14:05:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:05.442 14:05:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:05.442 14:05:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:05.442 14:05:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.442 14:05:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.442 14:05:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.443 14:05:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:05.443 14:05:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.443 14:05:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.443 14:05:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:05.443 14:05:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.443 14:05:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.443 14:05:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:05.443 14:05:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:05.443 14:05:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.443 14:05:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.443 14:05:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.443 14:05:56 -- 
nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.443 14:05:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:05.443 14:05:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.443 14:05:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.443 14:05:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.443 14:05:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:05.703 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.703 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:24:05.703 00:24:05.703 --- 10.0.0.2 ping statistics --- 00:24:05.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.703 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:05.703 14:05:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.703 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.703 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.269 ms 00:24:05.703 00:24:05.703 --- 10.0.0.1 ping statistics --- 00:24:05.703 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.703 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:24:05.703 14:05:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.703 14:05:56 -- nvmf/common.sh@410 -- # return 0 00:24:05.703 14:05:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:05.703 14:05:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.703 14:05:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:05.703 14:05:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:05.703 14:05:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.703 14:05:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:05.703 14:05:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:05.703 14:05:56 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:05.703 14:05:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:05.703 14:05:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:05.703 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.703 14:05:56 -- nvmf/common.sh@469 -- # nvmfpid=3353605 00:24:05.703 14:05:56 -- nvmf/common.sh@470 -- # waitforlisten 3353605 00:24:05.703 14:05:56 -- common/autotest_common.sh@819 -- # '[' -z 3353605 ']' 00:24:05.703 14:05:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:05.703 14:05:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.703 14:05:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:05.703 14:05:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.703 14:05:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:05.703 14:05:56 -- common/autotest_common.sh@10 -- # set +x 00:24:05.703 [2024-07-23 14:05:56.552836] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:05.703 [2024-07-23 14:05:56.552885] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.703 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.703 [2024-07-23 14:05:56.611323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.703 [2024-07-23 14:05:56.681466] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:05.703 [2024-07-23 14:05:56.681580] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.703 [2024-07-23 14:05:56.681588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.703 [2024-07-23 14:05:56.681594] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.703 [2024-07-23 14:05:56.681701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.703 [2024-07-23 14:05:56.681767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.703 [2024-07-23 14:05:56.681855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.703 [2024-07-23 14:05:56.681856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:06.645 14:05:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:06.645 14:05:57 -- common/autotest_common.sh@852 -- # return 0 00:24:06.645 14:05:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:06.645 14:05:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:06.645 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:24:06.645 14:05:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.645 14:05:57 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.645 14:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:06.645 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:24:06.645 [2024-07-23 14:05:57.391356] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.645 14:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:06.645 14:05:57 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:06.645 14:05:57 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:06.645 14:05:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:06.645 14:05:57 -- common/autotest_common.sh@10 -- # set +x 00:24:06.645 14:05:57 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:06.645 14:05:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.645 14:05:57 -- target/shutdown.sh@28 -- # cat 00:24:06.645 14:05:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.645 14:05:57 -- target/shutdown.sh@28 -- # cat 00:24:06.645 14:05:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.645 14:05:57 -- target/shutdown.sh@28 -- # cat 00:24:06.645 14:05:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.645 14:05:57 -- target/shutdown.sh@28 -- # cat 00:24:06.645 14:05:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.645 14:05:57 -- target/shutdown.sh@28 -- # cat 00:24:06.645 14:05:57 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.645 14:05:57 -- 
target/shutdown.sh@28 -- # cat
00:24:06.645 [... target/shutdown.sh@27/@28: the identical for-i/cat pair repeats for each of the ten subsystems while rpcs.txt is written; repeats elided ...]
00:24:06.645 14:05:57 -- target/shutdown.sh@35 -- # rpc_cmd
00:24:06.645 14:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable
00:24:06.645 14:05:57 -- common/autotest_common.sh@10 -- # set +x
00:24:06.645 Malloc1
00:24:06.645 [2024-07-23 14:05:57.486956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:06.645 Malloc2
00:24:06.645 Malloc3
00:24:06.645 Malloc4
00:24:06.645 Malloc5
00:24:06.906 Malloc6
00:24:06.906 Malloc7
00:24:06.906 Malloc8
00:24:06.906 Malloc9
00:24:06.906 Malloc10
00:24:06.906 14:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:24:06.906 14:05:57 -- target/shutdown.sh@36 -- # timing_exit create_subsystems
00:24:06.906 14:05:57 -- common/autotest_common.sh@718 -- # xtrace_disable
00:24:06.906 14:05:57 -- common/autotest_common.sh@10 -- # set +x
00:24:06.906 14:05:57 -- target/shutdown.sh@124 -- # perfpid=3353892
00:24:06.906 14:05:57 -- target/shutdown.sh@125 -- # waitforlisten 3353892 /var/tmp/bdevperf.sock
00:24:06.906 14:05:57 -- common/autotest_common.sh@819 -- # '[' -z 3353892 ']'
00:24:06.906 14:05:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:06.906 14:05:57 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:24:06.906 14:05:57 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:24:06.906 14:05:57 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:06.906 14:05:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:06.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
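Two traced @123 pieces above belong to one command: bdevperf's --json /dev/fd/63 argument is bash process substitution, with gen_nvmf_target_json's output becoming the file descriptor bdevperf reads its config from. A sketch of that launch sequence, under the assumption that the traced helper names (gen_nvmf_target_json, waitforlisten) behave as the trace suggests and with rootdir standing in for the checked-out spdk tree:

# Start bdevperf in the background, feeding the generated JSON config through
# process substitution, then block until its RPC socket answers.
"$rootdir/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock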
00:24:06.906 14:05:57 -- nvmf/common.sh@520 -- # config=()
00:24:06.906 14:05:57 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:06.906 14:05:57 -- nvmf/common.sh@520 -- # local subsystem config
00:24:06.906 14:05:57 -- common/autotest_common.sh@10 -- # set +x
00:24:07.168 [... nvmf/common.sh@522/@542: the same for-subsystem heredoc/cat blocks as in the tc2 run above repeat verbatim for subsystems 1 through 10; repeats elided ...]
00:24:07.168 [2024-07-23 14:05:57.956939] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:24:07.168 [2024-07-23 14:05:57.956987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3353892 ]
00:24:07.168 EAL: No free 2048 kB hugepages reported on node 1
00:24:07.168 14:05:57 -- nvmf/common.sh@544 -- # jq .
00:24:07.168 14:05:57 -- nvmf/common.sh@545 -- # IFS=,
00:24:07.168 14:05:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ ... }'
00:24:07.169 [... the rendered config is identical to the one printed for the tc2 run above: ten bdev_nvme_attach_controller entries, Nvme1 through Nvme10, trtype tcp, traddr 10.0.0.2, trsvcid 4420, cnode1-10/host1-10, hdgst/ddgst false ...]
00:24:07.169 [2024-07-23 14:05:58.012841] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:07.169 [2024-07-23 14:05:58.083673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:09.081 Running I/O for 10 seconds...
00:24:09.359 14:06:00 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:09.359 14:06:00 -- common/autotest_common.sh@852 -- # return 0
00:24:09.359 14:06:00 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:24:09.359 14:06:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:24:09.359 14:06:00 -- common/autotest_common.sh@10 -- # set +x
00:24:09.359 14:06:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:24:09.359 14:06:00 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:09.359 14:06:00 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1
00:24:09.359 14:06:00 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:24:09.359 14:06:00 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']'
00:24:09.359 14:06:00 -- target/shutdown.sh@57 -- # local ret=1
00:24:09.359 14:06:00 -- target/shutdown.sh@58 -- # local i
00:24:09.359 14:06:00 -- target/shutdown.sh@59 -- # (( i = 10 ))
00:24:09.359 14:06:00 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:24:09.359 14:06:00 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:24:09.359 14:06:00 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:24:09.359 14:06:00 -- common/autotest_common.sh@551 -- # xtrace_disable
00:24:09.359 14:06:00 -- common/autotest_common.sh@10 -- # set +x
00:24:09.359 14:06:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:24:09.359 14:06:00 -- target/shutdown.sh@60 -- # read_io_count=133
00:24:09.359 14:06:00 -- target/shutdown.sh@63 -- # '[' 133 -ge 100 ']'
00:24:09.359 14:06:00 -- target/shutdown.sh@64 -- # ret=0
00:24:09.359 14:06:00 -- target/shutdown.sh@65 -- # break
00:24:09.359 14:06:00 -- target/shutdown.sh@69 -- # return 0
00:24:09.359 14:06:00 -- target/shutdown.sh@134 -- # killprocess 3353605
00:24:09.359 14:06:00 -- common/autotest_common.sh@926 -- # '[' -z 3353605 ']'
00:24:09.359 14:06:00 -- common/autotest_common.sh@930 -- # kill -0 3353605
00:24:09.359 14:06:00 -- common/autotest_common.sh@931 -- # uname
00:24:09.359 14:06:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:09.359 14:06:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3353605
00:24:09.359 14:06:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:24:09.359 14:06:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:24:09.359 14:06:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3353605'
00:24:09.359 killing process with pid 3353605
00:24:09.359 14:06:00 -- common/autotest_common.sh@945 -- # kill 3353605
00:24:09.359 14:06:00 -- common/autotest_common.sh@950 -- # wait 3353605
00:24:09.359 [2024-07-23 14:06:00.242703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a430 is same with the state(5) to be set
00:24:09.359 [... the same tcp.c:1574 nvmf_tcp_qpair_set_recv_state *ERROR* line repeats dozens of times for tqpair=0x173a430, tqpair=0x173cde0 and tqpair=0x173a8e0 while the target's queue pairs are torn down; repeats elided ...]
with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.246139] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a8e0 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.246146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a8e0 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.246152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a8e0 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.246158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a8e0 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.246165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a8e0 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.246171] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a8e0 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.246178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173a8e0 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248140] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248162] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248186] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248199] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248205] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248231] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248249] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248328] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the 
state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248394] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248425] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.361 [2024-07-23 14:06:00.248437] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248449] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248506] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.248533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b220 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249189] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249210] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249268] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249289] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249301] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 
14:06:00.249314] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249346] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249366] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249379] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249397] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249410] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249435] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249441] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249447] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same 
with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.362 [2024-07-23 14:06:00.249522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.249529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.249537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.249543] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.249549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.249555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.249562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.249567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173b6d0 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250461] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250482] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250488] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250508] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250515] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250554] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the 
state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250666] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250712] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250731] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250769] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250786] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250794] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250813] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.363 [2024-07-23 14:06:00.250826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.250832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.250839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.250845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.250852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.250857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.250864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.250870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.250877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173bb80 is same with the state(5) to be set 00:24:09.364 [2024-07-23 14:06:00.251410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.364 [2024-07-23 14:06:00.251444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.364 [2024-07-23 14:06:00.251461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.364 [2024-07-23 14:06:00.251469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.364 [2024-07-23 14:06:00.251478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 
00:24:09.364 [2024-07-23 14:06:00.251486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: 30 further aborted commands in the same format, 14:06:00.251494-14:06:00.251947 (00:24:09.364-00:24:09.365); the commands on sqid:1 were READ cid:1/10/7/11/14/17/22/24/25/29/32/26/19/33/9/37/40/43/47 and WRITE cid:3/6/8/5/16/18/20/13/23/27/12, each nsid:1 len:128, lba 13824-20992, and each was followed by the identical "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0" completion]
[2024-07-23 14:06:00.251954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.251963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.251969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.251977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.251984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.251993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.251999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.252008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.252014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.252023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.252029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.252037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.252050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.252059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.252054] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with [2024-07-23 14:06:00.252066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:24:09.365 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.252078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.252081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.365 [2024-07-23 14:06:00.252085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.252090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.365 [2024-07-23 14:06:00.252094] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.252098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with [2024-07-23 14:06:00.252102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(5) to be set 00:24:09.365 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.252110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.365 [2024-07-23 14:06:00.252113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.252118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.365 [2024-07-23 14:06:00.252121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.365 [2024-07-23 14:06:00.252126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.365 [2024-07-23 14:06:00.252130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.365 [2024-07-23 14:06:00.252133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252142] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252149] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252170] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 
14:06:00.252182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252192] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22400 len:12[2024-07-23 14:06:00.252238] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-23 14:06:00.252247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252264] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:09.366 [2024-07-23 14:06:00.252272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252303] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252359] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.366 [2024-07-23 14:06:00.252374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.366 [2024-07-23 14:06:00.252390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.366 [2024-07-23 14:06:00.252392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.252401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.367 [2024-07-23 14:06:00.252405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.252409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252412] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.252418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.367 [2024-07-23 14:06:00.252421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.252427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252429] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.252437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.367 [2024-07-23 14:06:00.252440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.252445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.252454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.367 [2024-07-23 14:06:00.252455] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c010 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.252464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.367 [2024-07-23 14:06:00.252480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.367 [2024-07-23 14:06:00.252495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252906] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2b83920 was disconnected and freed. reset controller. 00:24:09.367 [2024-07-23 14:06:00.252959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.252969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.252984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.252991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.252998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999f70 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253065] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3c8e0 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253191] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1978710 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253233] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.367 [2024-07-23 14:06:00.253247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.367 [2024-07-23 14:06:00.253254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.367 [2024-07-23 14:06:00.253257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1994430 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253293] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253308] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 
[2024-07-23 14:06:00.253316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253353] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253360] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc660 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253393] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253401] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253422] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197b470 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253458] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253479] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253513] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.368 [2024-07-23 14:06:00.253528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.368 [2024-07-23 14:06:00.253535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.368 [2024-07-23 14:06:00.253540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199c5e0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253566] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253578] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253591] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253613] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253629] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253637] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253644] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3c160 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253662] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253669] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c4a0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.253679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.369 [2024-07-23 14:06:00.253728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.369 [2024-07-23 14:06:00.253736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19936b0 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254236] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254305] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254324] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254337] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 
00:24:09.369 [2024-07-23 14:06:00.254343] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254350] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254375] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254382] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254434] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.369 [2024-07-23 14:06:00.254471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254502] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254532] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254556] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254574] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x173c930 is same with the state(5) to be set 00:24:09.370 [2024-07-23 14:06:00.254826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.254989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.254995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 
[2024-07-23 14:06:00.255041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.370 [2024-07-23 14:06:00.255170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.370 [2024-07-23 14:06:00.255176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 
14:06:00.255207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.371 [2024-07-23 14:06:00.255353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.371 [2024-07-23 14:06:00.255360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.255369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.255375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.255383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.267981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.267991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.268004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.268014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.268027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.268037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.268067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.268078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.268090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.268100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.268111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.268121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.268135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.371 [2024-07-23 14:06:00.268146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.371 [2024-07-23 14:06:00.268158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19692c0 is same with the state(5) to be set
00:24:09.372 [2024-07-23 14:06:00.268489] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19692c0 was disconnected and freed. reset controller.
00:24:09.372 [2024-07-23 14:06:00.268749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.268980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.268991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.372 [2024-07-23 14:06:00.269187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.372 [2024-07-23 14:06:00.269197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.373 [2024-07-23 14:06:00.269965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.373 [2024-07-23 14:06:00.269975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.269987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.269996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.270008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.270018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.270029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.270038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.270055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.270065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.270077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.270086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.270104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.270114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.270128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.270138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.270149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.270159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.270257] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a22cf0 was disconnected and freed. reset controller.
00:24:09.374 [2024-07-23 14:06:00.271834] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1999f70 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.271875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3c8e0 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.271892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1978710 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.271910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1994430 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.271930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dc660 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.271948] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197b470 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.271983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:09.374 [2024-07-23 14:06:00.271996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272008] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:09.374 [2024-07-23 14:06:00.272017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:09.374 [2024-07-23 14:06:00.272040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:09.374 [2024-07-23 14:06:00.272069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272078] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0c640 is same with the state(5) to be set
00:24:09.374 [2024-07-23 14:06:00.272099] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199c5e0 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.272120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3c160 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.272136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19936b0 (9): Bad file descriptor
00:24:09.374 [2024-07-23 14:06:00.272262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.374 [2024-07-23 14:06:00.272663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.374 [2024-07-23 14:06:00.272675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.272978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.272990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.375 [2024-07-23 14:06:00.273365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.375 [2024-07-23 14:06:00.273375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.273728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.273819] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1968b80 was disconnected and freed. reset controller.
00:24:09.376 [2024-07-23 14:06:00.276646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.276981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.276991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.277003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.277013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.277025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.277036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.277054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.277065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.277078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.277088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.277101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.277112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.376 [2024-07-23 14:06:00.277125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.376 [2024-07-23 14:06:00.277135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.377 [2024-07-23 14:06:00.277530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.377 [2024-07-23 14:06:00.277543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21760 len:128 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:09.377 [2024-07-23 14:06:00.277781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.377 [2024-07-23 14:06:00.277964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.377 [2024-07-23 14:06:00.277976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.378 [2024-07-23 14:06:00.277987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.378 [2024-07-23 14:06:00.278000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.378 
[2024-07-23 14:06:00.278010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.378 [2024-07-23 14:06:00.278022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.378 [2024-07-23 14:06:00.278034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.378 [2024-07-23 14:06:00.278052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.378 [2024-07-23 14:06:00.278062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.378 [2024-07-23 14:06:00.278075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.378 [2024-07-23 14:06:00.278086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.378 [2024-07-23 14:06:00.278099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.378 [2024-07-23 14:06:00.278111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.378 [2024-07-23 14:06:00.278124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.378 [2024-07-23 14:06:00.278134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.378 [2024-07-23 14:06:00.278147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.378 [2024-07-23 14:06:00.278157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.378 [2024-07-23 14:06:00.278250] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x29e0db0 was disconnected and freed. reset controller. 
00:24:09.378 [2024-07-23 14:06:00.278293] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:24:09.378 [2024-07-23 14:06:00.278313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:24:09.378 [2024-07-23 14:06:00.281531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:09.378 [2024-07-23 14:06:00.281999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.378 [2024-07-23 14:06:00.282368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.378 [2024-07-23 14:06:00.282385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19936b0 with addr=10.0.0.2, port=4420
00:24:09.378 [2024-07-23 14:06:00.282398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19936b0 is same with the state(5) to be set
00:24:09.378 [2024-07-23 14:06:00.282742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.378 [2024-07-23 14:06:00.283046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.378 [2024-07-23 14:06:00.283058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197b470 with addr=10.0.0.2, port=4420
00:24:09.378 [2024-07-23 14:06:00.283066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197b470 is same with the state(5) to be set
00:24:09.378 [2024-07-23 14:06:00.283114] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:09.378 [2024-07-23 14:06:00.283397] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:09.378 [2024-07-23 14:06:00.283431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.378 [2024-07-23 14:06:00.283807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.378 [2024-07-23 14:06:00.283814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.283984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.283992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.379 [2024-07-23 14:06:00.284399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.379 [2024-07-23 14:06:00.284408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.284415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.284424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.284431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.284441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.284449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.284458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.284466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.284475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.284482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.284490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.284498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.284506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.284514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.284523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.284530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.284538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7020 is same with the state(5) to be set
00:24:09.380 [2024-07-23 14:06:00.284592] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac7020 was disconnected and freed. reset controller.
00:24:09.380 [2024-07-23 14:06:00.284946] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:09.380 [2024-07-23 14:06:00.284967] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:09.380 [2024-07-23 14:06:00.284982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0c640 (9): Bad file descriptor
00:24:09.380 [2024-07-23 14:06:00.285362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.380 [2024-07-23 14:06:00.285709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.380 [2024-07-23 14:06:00.285720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18dc660 with addr=10.0.0.2, port=4420
00:24:09.380 [2024-07-23 14:06:00.285732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc660 is same with the state(5) to be set
00:24:09.380 [2024-07-23 14:06:00.285743] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19936b0 (9): Bad file descriptor
00:24:09.380 [2024-07-23 14:06:00.285753] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197b470 (9): Bad file descriptor
00:24:09.380 [2024-07-23 14:06:00.285786] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:09.380 [2024-07-23 14:06:00.286997] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:09.380 [2024-07-23 14:06:00.287069] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:09.380 [2024-07-23 14:06:00.287337] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:09.380 [2024-07-23 14:06:00.287709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.380 [2024-07-23 14:06:00.288015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.380 [2024-07-23 14:06:00.288026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a3c8e0 with addr=10.0.0.2, port=4420
00:24:09.380 [2024-07-23 14:06:00.288035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3c8e0 is same with the state(5) to be set
00:24:09.380 [2024-07-23 14:06:00.288060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dc660 (9): Bad file descriptor
00:24:09.380 [2024-07-23 14:06:00.288072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:24:09.380 [2024-07-23 14:06:00.288080] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:24:09.380 [2024-07-23 14:06:00.288088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:24:09.380 [2024-07-23 14:06:00.288102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:24:09.380 [2024-07-23 14:06:00.288110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:24:09.380 [2024-07-23 14:06:00.288117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:24:09.380 [2024-07-23 14:06:00.288159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.380 [2024-07-23 14:06:00.288322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.380 [2024-07-23 14:06:00.288330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.381 [2024-07-23 14:06:00.288790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.381 [2024-07-23 14:06:00.288799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.288985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.288994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.382 [2024-07-23 14:06:00.289174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.382 [2024-07-23 14:06:00.289183] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.382 [2024-07-23 14:06:00.289190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.382 [2024-07-23 14:06:00.289199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.382 [2024-07-23 14:06:00.289206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.382 [2024-07-23 14:06:00.289214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.382 [2024-07-23 14:06:00.289221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.382 [2024-07-23 14:06:00.289230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.382 [2024-07-23 14:06:00.289237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.382 [2024-07-23 14:06:00.289246] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b35280 is same with the state(5) to be set 00:24:09.382 [2024-07-23 14:06:00.290250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.382 [2024-07-23 14:06:00.290263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.382 [2024-07-23 14:06:00.290274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.382 [2024-07-23 14:06:00.290282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.382 [2024-07-23 14:06:00.290292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.382 [2024-07-23 14:06:00.290300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.382 [2024-07-23 14:06:00.290312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290353] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.383 [2024-07-23 14:06:00.290812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.383 [2024-07-23 14:06:00.290820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.290985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.290993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:24:09.384 [2024-07-23 14:06:00.291026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 
[2024-07-23 14:06:00.291204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.384 [2024-07-23 14:06:00.291317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.384 [2024-07-23 14:06:00.291324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.291332] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac5b70 is same with the state(5) to be set 00:24:09.385 [2024-07-23 14:06:00.292332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 
14:06:00.292366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292537] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.385 [2024-07-23 14:06:00.292831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.385 [2024-07-23 14:06:00.292840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292880] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.292986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.292993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293215] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.386 [2024-07-23 14:06:00.293264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.386 [2024-07-23 14:06:00.293273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.387 [2024-07-23 14:06:00.293281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.387 [2024-07-23 14:06:00.293289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.387 [2024-07-23 14:06:00.293297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.387 [2024-07-23 14:06:00.293305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.387 [2024-07-23 14:06:00.293314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.387 [2024-07-23 14:06:00.293322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.387 [2024-07-23 14:06:00.293329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.387 [2024-07-23 14:06:00.293338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.387 [2024-07-23 14:06:00.293345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.387 [2024-07-23 14:06:00.293355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.387 [2024-07-23 14:06:00.293363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.387 [2024-07-23 14:06:00.293372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.387 [2024-07-23 14:06:00.293379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.387 [2024-07-23 14:06:00.293388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.387 [2024-07-23 14:06:00.293395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.387 [2024-07-23 14:06:00.293423] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a21710 is same with the state(5) to be set
[... dozens of further READ/WRITE commands on qid:1 (cids 0-63, lba 13824-24192, len:128) each completed with ABORTED - SQ DELETION (00/08), elided ...]
00:24:09.389 [2024-07-23 14:06:00.295508] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b35b60 is same with the state(5) to be set
00:24:09.389 [2024-07-23 14:06:00.296969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:09.389 [2024-07-23 14:06:00.296985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
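Each NOTICE pair above is one in-flight I/O being completed with ABORTED - SQ DELETION (status 00/08) as its TCP qpair is torn down: the target deleted the submission queue, so every outstanding READ/WRITE on qid:1 is failed back to the initiator before bdev_nvme attempts the controller reset that then also fails. A minimal triage sketch for a log like this (the log file name is an assumption, adjust as needed):

    # Count the aborts and see which opcodes/queues they hit.
    log=nvmf-tcp-phy-autotest.log                       # assumed file name
    grep -c 'ABORTED - SQ DELETION' "$log"              # total aborted commands
    grep -oE '(READ|WRITE) sqid:[0-9]+' "$log" | sort | uniq -c   # per opcode/queue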
00:24:09.389 [2024-07-23 14:06:00.296993] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:09.389 [2024-07-23 14:06:00.297004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:09.389 [2024-07-23 14:06:00.297014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:09.389 [2024-07-23 14:06:00.297472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.389 [2024-07-23 14:06:00.297785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.389 [2024-07-23 14:06:00.297798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a0c640 with addr=10.0.0.2, port=4420 00:24:09.389 [2024-07-23 14:06:00.297807] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0c640 is same with the state(5) to be set 00:24:09.389 [2024-07-23 14:06:00.298166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.389 [2024-07-23 14:06:00.298546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.389 [2024-07-23 14:06:00.298559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x199c5e0 with addr=10.0.0.2, port=4420 00:24:09.389 [2024-07-23 14:06:00.298567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199c5e0 is same with the state(5) to be set 00:24:09.389 [2024-07-23 14:06:00.298578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3c8e0 (9): Bad file descriptor 00:24:09.389 [2024-07-23 14:06:00.298588] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:09.389 [2024-07-23 14:06:00.298595] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:09.389 [2024-07-23 14:06:00.298603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:09.389 [2024-07-23 14:06:00.298640] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.389 [2024-07-23 14:06:00.298657] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.389 [2024-07-23 14:06:00.298667] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
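errno = 111 is ECONNREFUSED: the reconnect attempts find nothing listening on 10.0.0.2:4420 any more, and the "Bad file descriptor" flush errors plus "controller reinitialization failed" messages all follow from the same dead sockets. The same check can be made by hand with bash's /dev/tcp redirection (address and port taken from this log):

    # Prints "refused" while the target is down, i.e. the errno 111 case above.
    if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo listening; exec 3>&-
    else
        echo refused
    fi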
00:24:09.389 task offset: 18944 on job bdev=Nvme10n1 fails
00:24:09.389
00:24:09.389 Latency(us)
00:24:09.389 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:09.389 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.389 Job: Nvme1n1 ended in about 0.43 seconds with error
00:24:09.389 Verification LBA range: start 0x0 length 0x400
00:24:09.389 Nvme1n1 : 0.43 388.35 24.27 148.83 0.00 118006.00 11625.52 98930.87
00:24:09.389 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.389 Job: Nvme2n1 ended in about 0.42 seconds with error
00:24:09.389 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme2n1 : 0.42 391.12 24.44 152.63 0.00 114706.25 23820.91 106681.21
00:24:09.390 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.390 Job: Nvme3n1 ended in about 0.41 seconds with error
00:24:09.390 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme3n1 : 0.41 395.43 24.71 154.31 0.00 111593.16 23251.03 106681.21
00:24:09.390 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.390 Job: Nvme4n1 ended in about 0.43 seconds with error
00:24:09.390 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme4n1 : 0.43 291.60 18.22 148.11 0.00 137603.40 78415.25 131299.95
00:24:09.390 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.390 Job: Nvme5n1 ended in about 0.43 seconds with error
00:24:09.390 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme5n1 : 0.43 295.19 18.45 149.94 0.00 133655.05 75679.83 118534.68
00:24:09.390 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.390 Job: Nvme6n1 ended in about 0.43 seconds with error
00:24:09.390 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme6n1 : 0.43 290.20 18.14 147.40 0.00 133935.79 81150.66 124005.51
00:24:09.390 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.390 Job: Nvme7n1 ended in about 0.42 seconds with error
00:24:09.390 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme7n1 : 0.42 302.75 18.92 153.78 0.00 125645.56 51516.99 113519.75
00:24:09.390 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.390 Job: Nvme8n1 ended in about 0.44 seconds with error
00:24:09.390 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme8n1 : 0.44 288.82 18.05 146.70 0.00 130196.19 77959.35 106681.21
00:24:09.390 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.390 Job: Nvme9n1 ended in about 0.42 seconds with error
00:24:09.390 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme9n1 : 0.42 299.42 18.71 152.09 0.00 122841.76 63370.46 113063.85
00:24:09.390 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:09.390 Job: Nvme10n1 ended in about 0.41 seconds with error
00:24:09.390 Verification LBA range: start 0x0 length 0x400
00:24:09.390 Nvme10n1 : 0.41 306.24 19.14 155.55 0.00 117607.28 39207.62 110328.43
00:24:09.390 ===================================================================================================================
00:24:09.390 Total : 3249.09 203.07 1509.33 0.00 124014.78 11625.52 131299.95
00:24:09.390 [2024-07-23 14:06:00.321415] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:09.390 [2024-07-23 14:06:00.321456] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*:
[nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:09.390 [2024-07-23 14:06:00.321471] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.390 [2024-07-23 14:06:00.321890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.322300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.322313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1978710 with addr=10.0.0.2, port=4420 00:24:09.390 [2024-07-23 14:06:00.322324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1978710 is same with the state(5) to be set 00:24:09.390 [2024-07-23 14:06:00.322598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.323075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.323087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1994430 with addr=10.0.0.2, port=4420 00:24:09.390 [2024-07-23 14:06:00.323094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1994430 is same with the state(5) to be set 00:24:09.390 [2024-07-23 14:06:00.323416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.323739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.323750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1999f70 with addr=10.0.0.2, port=4420 00:24:09.390 [2024-07-23 14:06:00.323758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1999f70 is same with the state(5) to be set 00:24:09.390 [2024-07-23 14:06:00.323773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0c640 (9): Bad file descriptor 00:24:09.390 [2024-07-23 14:06:00.323785] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199c5e0 (9): Bad file descriptor 00:24:09.390 [2024-07-23 14:06:00.323794] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:09.390 [2024-07-23 14:06:00.323800] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:09.390 [2024-07-23 14:06:00.323809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:09.390 [2024-07-23 14:06:00.323830] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.390 [2024-07-23 14:06:00.323842] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.390 [2024-07-23 14:06:00.324790] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:09.390 [2024-07-23 14:06:00.324804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:09.390 [2024-07-23 14:06:00.324814] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
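As a sanity check on the latency table above, the MiB/s column is just IOPS times the 65536-byte IO size from the job headers; for the Nvme1n1 row:

    # 388.35 IOPS x 65536 B per IO, converted to MiB/s (1 MiB = 1048576 B).
    echo 'scale=2; 388.35 * 65536 / 1048576' | bc    # 24.27, matching the table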
00:24:09.390 [2024-07-23 14:06:00.325254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.325606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.325618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a3c160 with addr=10.0.0.2, port=4420 00:24:09.390 [2024-07-23 14:06:00.325626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3c160 is same with the state(5) to be set 00:24:09.390 [2024-07-23 14:06:00.325636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1978710 (9): Bad file descriptor 00:24:09.390 [2024-07-23 14:06:00.325645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1994430 (9): Bad file descriptor 00:24:09.390 [2024-07-23 14:06:00.325655] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1999f70 (9): Bad file descriptor 00:24:09.390 [2024-07-23 14:06:00.325663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:09.390 [2024-07-23 14:06:00.325669] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:09.390 [2024-07-23 14:06:00.325677] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:09.390 [2024-07-23 14:06:00.325688] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:09.390 [2024-07-23 14:06:00.325695] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:09.390 [2024-07-23 14:06:00.325702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:09.390 [2024-07-23 14:06:00.325737] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.390 [2024-07-23 14:06:00.325749] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.390 [2024-07-23 14:06:00.325759] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.390 [2024-07-23 14:06:00.325773] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.390 [2024-07-23 14:06:00.325783] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:09.390 [2024-07-23 14:06:00.325846] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.390 [2024-07-23 14:06:00.325854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
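The repeated "Unable to perform failover, already in progress" notices are expected here: each failed socket flush queues another failover attempt while one is still running, and bdev_nvme simply drops the duplicates. How long this retry loop spins is bounded by the controller's reconnect settings; a hedged example of pinning them down at attach time (flag names per SPDK's rpc.py, values purely illustrative):

    # Illustrative only: retry reconnecting every 2 s, give up after 30 s, and
    # fail queued I/O after 10 s instead of holding it for the full window.
    rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 --reconnect-delay-sec 2 \
        --ctrlr-loss-timeout-sec 30 --fast-io-failure-timeout-sec 10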
00:24:09.390 [2024-07-23 14:06:00.326203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.390 [2024-07-23 14:06:00.326505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.391 [2024-07-23 14:06:00.326517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x197b470 with addr=10.0.0.2, port=4420 00:24:09.391 [2024-07-23 14:06:00.326526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x197b470 is same with the state(5) to be set 00:24:09.391 [2024-07-23 14:06:00.326871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.391 [2024-07-23 14:06:00.327140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.391 [2024-07-23 14:06:00.327152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19936b0 with addr=10.0.0.2, port=4420 00:24:09.391 [2024-07-23 14:06:00.327160] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19936b0 is same with the state(5) to be set 00:24:09.391 [2024-07-23 14:06:00.327170] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3c160 (9): Bad file descriptor 00:24:09.391 [2024-07-23 14:06:00.327178] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:09.391 [2024-07-23 14:06:00.327185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:09.391 [2024-07-23 14:06:00.327192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:09.391 [2024-07-23 14:06:00.327202] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:09.391 [2024-07-23 14:06:00.327211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:09.391 [2024-07-23 14:06:00.327219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:09.391 [2024-07-23 14:06:00.327229] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:09.391 [2024-07-23 14:06:00.327236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:09.391 [2024-07-23 14:06:00.327242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:09.391 [2024-07-23 14:06:00.327297] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:09.391 [2024-07-23 14:06:00.327308] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:09.391 [2024-07-23 14:06:00.327317] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.391 [2024-07-23 14:06:00.327323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.391 [2024-07-23 14:06:00.327329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:09.391 [2024-07-23 14:06:00.327346] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x197b470 (9): Bad file descriptor 00:24:09.391 [2024-07-23 14:06:00.327356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19936b0 (9): Bad file descriptor 00:24:09.391 [2024-07-23 14:06:00.327364] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:09.391 [2024-07-23 14:06:00.327370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:09.391 [2024-07-23 14:06:00.327380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:09.391 [2024-07-23 14:06:00.327405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.391 [2024-07-23 14:06:00.327558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.391 [2024-07-23 14:06:00.327847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.391 [2024-07-23 14:06:00.327859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18dc660 with addr=10.0.0.2, port=4420 00:24:09.391 [2024-07-23 14:06:00.327866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18dc660 is same with the state(5) to be set 00:24:09.391 [2024-07-23 14:06:00.328218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.391 [2024-07-23 14:06:00.328503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.391 [2024-07-23 14:06:00.328515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a3c8e0 with addr=10.0.0.2, port=4420 00:24:09.391 [2024-07-23 14:06:00.328522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a3c8e0 is same with the state(5) to be set 00:24:09.391 [2024-07-23 14:06:00.328529] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:09.391 [2024-07-23 14:06:00.328536] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:09.391 [2024-07-23 14:06:00.328542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:09.391 [2024-07-23 14:06:00.328551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:09.391 [2024-07-23 14:06:00.328558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:09.391 [2024-07-23 14:06:00.328564] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:09.391 [2024-07-23 14:06:00.328590] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.391 [2024-07-23 14:06:00.328598] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
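By this point all ten controllers (cnode1 through cnode10) are cycling between reset, reconnect, and failure against a target that has gone away, which is exactly the condition this shutdown test provokes. When reproducing such a run interactively, the per-controller state can be inspected over the bdevperf RPC socket (socket path as used elsewhere in this log):

    # List the controllers and their current state on the bdevperf instance.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers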
00:24:09.391 [2024-07-23 14:06:00.328606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18dc660 (9): Bad file descriptor 00:24:09.391 [2024-07-23 14:06:00.328615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a3c8e0 (9): Bad file descriptor 00:24:09.391 [2024-07-23 14:06:00.328637] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:09.391 [2024-07-23 14:06:00.328645] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:09.391 [2024-07-23 14:06:00.328652] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:09.391 [2024-07-23 14:06:00.328660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:09.391 [2024-07-23 14:06:00.328666] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:09.391 [2024-07-23 14:06:00.328674] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:09.391 [2024-07-23 14:06:00.328696] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.391 [2024-07-23 14:06:00.328703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:09.963 14:06:00 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:09.963 14:06:00 -- target/shutdown.sh@138 -- # sleep 1 00:24:10.905 14:06:01 -- target/shutdown.sh@141 -- # kill -9 3353892 00:24:10.905 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (3353892) - No such process 00:24:10.905 14:06:01 -- target/shutdown.sh@141 -- # true 00:24:10.905 14:06:01 -- target/shutdown.sh@143 -- # stoptarget 00:24:10.905 14:06:01 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:10.905 14:06:01 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:10.905 14:06:01 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:10.905 14:06:01 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:10.905 14:06:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:10.905 14:06:01 -- nvmf/common.sh@116 -- # sync 00:24:10.905 14:06:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:10.905 14:06:01 -- nvmf/common.sh@119 -- # set +e 00:24:10.905 14:06:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:10.905 14:06:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:10.905 rmmod nvme_tcp 00:24:10.905 rmmod nvme_fabrics 00:24:10.905 rmmod nvme_keyring 00:24:10.905 14:06:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:10.905 14:06:01 -- nvmf/common.sh@123 -- # set -e 00:24:10.905 14:06:01 -- nvmf/common.sh@124 -- # return 0 00:24:10.905 14:06:01 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:10.905 14:06:01 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:10.905 14:06:01 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:10.905 14:06:01 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:10.905 14:06:01 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:10.905 14:06:01 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:10.905 14:06:01 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.905 14:06:01 
-- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.905 14:06:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.817 14:06:03 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:12.817 00:24:12.817 real 0m7.639s 00:24:12.817 user 0m18.649s 00:24:12.817 sys 0m1.238s 00:24:12.817 14:06:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.817 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:24:12.817 ************************************ 00:24:12.817 END TEST nvmf_shutdown_tc3 00:24:12.817 ************************************ 00:24:13.077 14:06:03 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:13.077 00:24:13.077 real 0m31.235s 00:24:13.077 user 1m20.949s 00:24:13.077 sys 0m8.120s 00:24:13.077 14:06:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:13.077 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:24:13.077 ************************************ 00:24:13.077 END TEST nvmf_shutdown 00:24:13.077 ************************************ 00:24:13.077 14:06:03 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:13.077 14:06:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:13.078 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:24:13.078 14:06:03 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:13.078 14:06:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:13.078 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:24:13.078 14:06:03 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:13.078 14:06:03 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:13.078 14:06:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:13.078 14:06:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:13.078 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:24:13.078 ************************************ 00:24:13.078 START TEST nvmf_multicontroller 00:24:13.078 ************************************ 00:24:13.078 14:06:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:13.078 * Looking for test storage... 
00:24:13.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:13.078 14:06:04 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:13.078 14:06:04 -- nvmf/common.sh@7 -- # uname -s 00:24:13.078 14:06:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:13.078 14:06:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:13.078 14:06:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:13.078 14:06:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:13.078 14:06:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:13.078 14:06:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:13.078 14:06:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:13.078 14:06:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:13.078 14:06:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:13.078 14:06:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:13.078 14:06:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:13.078 14:06:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:13.078 14:06:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:13.078 14:06:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:13.078 14:06:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:13.078 14:06:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:13.078 14:06:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:13.078 14:06:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:13.078 14:06:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:13.078 14:06:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.078 14:06:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.078 14:06:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.078 14:06:04 -- paths/export.sh@5 -- # export PATH 00:24:13.078 14:06:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:13.078 14:06:04 -- nvmf/common.sh@46 -- # : 0 00:24:13.078 14:06:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:13.078 14:06:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:13.078 14:06:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:13.078 14:06:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:13.078 14:06:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:13.078 14:06:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:13.078 14:06:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:13.078 14:06:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:13.078 14:06:04 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:13.078 14:06:04 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:13.078 14:06:04 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:13.078 14:06:04 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:13.078 14:06:04 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:13.078 14:06:04 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:13.078 14:06:04 -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:13.078 14:06:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:13.078 14:06:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.078 14:06:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:13.078 14:06:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:13.078 14:06:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:13.078 14:06:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:13.078 14:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:13.078 14:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.078 14:06:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:13.078 14:06:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:13.078 14:06:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:13.078 14:06:04 -- common/autotest_common.sh@10 -- # set +x 00:24:19.657 14:06:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:19.657 14:06:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:19.657 14:06:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:19.657 14:06:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:19.657 
14:06:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:19.657 14:06:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:19.657 14:06:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:19.657 14:06:09 -- nvmf/common.sh@294 -- # net_devs=() 00:24:19.657 14:06:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:19.657 14:06:09 -- nvmf/common.sh@295 -- # e810=() 00:24:19.657 14:06:09 -- nvmf/common.sh@295 -- # local -ga e810 00:24:19.657 14:06:09 -- nvmf/common.sh@296 -- # x722=() 00:24:19.657 14:06:09 -- nvmf/common.sh@296 -- # local -ga x722 00:24:19.657 14:06:09 -- nvmf/common.sh@297 -- # mlx=() 00:24:19.657 14:06:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:19.657 14:06:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.658 14:06:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:19.658 14:06:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:19.658 14:06:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:19.658 14:06:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:19.658 14:06:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:19.658 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:19.658 14:06:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:19.658 14:06:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:19.658 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:19.658 14:06:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:19.658 14:06:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:24:19.658 14:06:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.658 14:06:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:19.658 14:06:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.658 14:06:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:19.658 Found net devices under 0000:86:00.0: cvl_0_0 00:24:19.658 14:06:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.658 14:06:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:19.658 14:06:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.658 14:06:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:19.658 14:06:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.658 14:06:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:19.658 Found net devices under 0000:86:00.1: cvl_0_1 00:24:19.658 14:06:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.658 14:06:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:19.658 14:06:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:19.658 14:06:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:19.658 14:06:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.658 14:06:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.658 14:06:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.658 14:06:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:19.658 14:06:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.658 14:06:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.658 14:06:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:19.658 14:06:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.658 14:06:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.658 14:06:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:19.658 14:06:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:19.658 14:06:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.658 14:06:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.658 14:06:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.658 14:06:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.658 14:06:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:19.658 14:06:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.658 14:06:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.658 14:06:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.658 14:06:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:19.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:19.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:24:19.658 00:24:19.658 --- 10.0.0.2 ping statistics --- 00:24:19.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.658 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:19.658 14:06:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:19.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:24:19.658 00:24:19.658 --- 10.0.0.1 ping statistics --- 00:24:19.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.658 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:24:19.658 14:06:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.658 14:06:09 -- nvmf/common.sh@410 -- # return 0 00:24:19.658 14:06:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:19.658 14:06:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.658 14:06:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:19.658 14:06:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.658 14:06:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:19.658 14:06:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:19.658 14:06:09 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:19.658 14:06:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:19.658 14:06:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:19.658 14:06:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.658 14:06:09 -- nvmf/common.sh@469 -- # nvmfpid=3358449 00:24:19.658 14:06:09 -- nvmf/common.sh@470 -- # waitforlisten 3358449 00:24:19.658 14:06:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:19.658 14:06:09 -- common/autotest_common.sh@819 -- # '[' -z 3358449 ']' 00:24:19.658 14:06:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.658 14:06:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:19.658 14:06:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.658 14:06:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:19.658 14:06:09 -- common/autotest_common.sh@10 -- # set +x 00:24:19.658 [2024-07-23 14:06:09.784180] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:19.658 [2024-07-23 14:06:09.784224] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.658 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.658 [2024-07-23 14:06:09.842901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:19.658 [2024-07-23 14:06:09.917570] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:19.658 [2024-07-23 14:06:09.917690] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.658 [2024-07-23 14:06:09.917697] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:19.658 [2024-07-23 14:06:09.917704] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.658 [2024-07-23 14:06:09.917806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.658 [2024-07-23 14:06:09.917891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.658 [2024-07-23 14:06:09.917892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.658 14:06:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:19.658 14:06:10 -- common/autotest_common.sh@852 -- # return 0 00:24:19.658 14:06:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:19.658 14:06:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:19.658 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.658 14:06:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.658 14:06:10 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.658 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.658 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.658 [2024-07-23 14:06:10.630701] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.658 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.658 14:06:10 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:19.658 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.658 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.658 Malloc0 00:24:19.658 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.658 14:06:10 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.658 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.658 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:19.918 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.918 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 [2024-07-23 14:06:10.695229] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:19.918 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 [2024-07-23 14:06:10.703159] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:19.918 14:06:10 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 Malloc1 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:19.918 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:19.918 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:19.918 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:19.918 14:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:19.918 14:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.918 14:06:10 -- host/multicontroller.sh@44 -- # bdevperf_pid=3358742 00:24:19.918 14:06:10 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:19.918 14:06:10 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.918 14:06:10 -- host/multicontroller.sh@47 -- # waitforlisten 3358742 /var/tmp/bdevperf.sock 00:24:19.918 14:06:10 -- common/autotest_common.sh@819 -- # '[' -z 3358742 ']' 00:24:19.918 14:06:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.918 14:06:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:19.918 14:06:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
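Note: the bdevperf launch traced above (multicontroller.sh@43-47) can be reproduced by hand outside CI. A minimal sketch, assuming the same workspace layout and that rpc.py is available from the SPDK checkout; rpc_get_methods is the readiness probe waitforlisten relies on, everything else mirrors the flags in the log:

    # start bdevperf idle: -z waits for an explicit perform_tests RPC, -r picks a
    # dedicated UNIX socket so it does not collide with the target's /var/tmp/spdk.sock
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    bdevperf_pid=$!
    # poll until the RPC socket answers before attaching any controllers
    until ./scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done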
00:24:19.918 14:06:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:19.918 14:06:10 -- common/autotest_common.sh@10 -- # set +x 00:24:20.855 14:06:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:20.855 14:06:11 -- common/autotest_common.sh@852 -- # return 0 00:24:20.855 14:06:11 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:20.855 14:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:20.855 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:24:20.855 NVMe0n1 00:24:20.855 14:06:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:20.855 14:06:11 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.855 14:06:11 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:20.855 14:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:20.855 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:24:20.855 14:06:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:20.855 1 00:24:20.855 14:06:11 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:20.855 14:06:11 -- common/autotest_common.sh@640 -- # local es=0 00:24:20.856 14:06:11 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:20.856 14:06:11 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:24:20.856 14:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:20.856 14:06:11 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:24:21.115 14:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:21.115 14:06:11 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:21.115 14:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.115 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:24:21.115 request: 00:24:21.115 { 00:24:21.115 "name": "NVMe0", 00:24:21.115 "trtype": "tcp", 00:24:21.115 "traddr": "10.0.0.2", 00:24:21.115 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:21.115 "hostaddr": "10.0.0.2", 00:24:21.115 "hostsvcid": "60000", 00:24:21.115 "adrfam": "ipv4", 00:24:21.115 "trsvcid": "4420", 00:24:21.115 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.115 "method": "bdev_nvme_attach_controller", 00:24:21.115 "req_id": 1 00:24:21.115 } 00:24:21.115 Got JSON-RPC error response 00:24:21.115 response: 00:24:21.115 { 00:24:21.115 "code": -114, 00:24:21.115 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:21.115 } 00:24:21.115 14:06:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:24:21.115 14:06:11 -- common/autotest_common.sh@643 -- # es=1 00:24:21.115 14:06:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:21.115 14:06:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:21.115 14:06:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:21.115 14:06:11 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:21.115 14:06:11 -- common/autotest_common.sh@640 -- # local es=0 00:24:21.115 14:06:11 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:21.115 14:06:11 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:24:21.115 14:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:21.115 14:06:11 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:24:21.115 14:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:21.115 14:06:11 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:21.115 14:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.115 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:24:21.115 request: 00:24:21.115 { 00:24:21.115 "name": "NVMe0", 00:24:21.115 "trtype": "tcp", 00:24:21.115 "traddr": "10.0.0.2", 00:24:21.115 "hostaddr": "10.0.0.2", 00:24:21.115 "hostsvcid": "60000", 00:24:21.115 "adrfam": "ipv4", 00:24:21.115 "trsvcid": "4420", 00:24:21.115 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:21.115 "method": "bdev_nvme_attach_controller", 00:24:21.115 "req_id": 1 00:24:21.115 } 00:24:21.115 Got JSON-RPC error response 00:24:21.115 response: 00:24:21.115 { 00:24:21.115 "code": -114, 00:24:21.115 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:21.115 } 00:24:21.115 14:06:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:24:21.115 14:06:11 -- common/autotest_common.sh@643 -- # es=1 00:24:21.115 14:06:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:21.115 14:06:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:21.115 14:06:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:21.116 14:06:11 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:21.116 14:06:11 -- common/autotest_common.sh@640 -- # local es=0 00:24:21.116 14:06:11 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:21.116 14:06:11 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:24:21.116 14:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:21.116 14:06:11 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:24:21.116 14:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:21.116 14:06:11 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:21.116 14:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.116 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:24:21.116 request: 00:24:21.116 { 00:24:21.116 "name": "NVMe0", 00:24:21.116 "trtype": "tcp", 00:24:21.116 "traddr": "10.0.0.2", 00:24:21.116 "hostaddr": 
"10.0.0.2", 00:24:21.116 "hostsvcid": "60000", 00:24:21.116 "adrfam": "ipv4", 00:24:21.116 "trsvcid": "4420", 00:24:21.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.116 "multipath": "disable", 00:24:21.116 "method": "bdev_nvme_attach_controller", 00:24:21.116 "req_id": 1 00:24:21.116 } 00:24:21.116 Got JSON-RPC error response 00:24:21.116 response: 00:24:21.116 { 00:24:21.116 "code": -114, 00:24:21.116 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:21.116 } 00:24:21.116 14:06:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:24:21.116 14:06:11 -- common/autotest_common.sh@643 -- # es=1 00:24:21.116 14:06:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:21.116 14:06:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:21.116 14:06:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:21.116 14:06:11 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:21.116 14:06:11 -- common/autotest_common.sh@640 -- # local es=0 00:24:21.116 14:06:11 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:21.116 14:06:11 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:24:21.116 14:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:21.116 14:06:11 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:24:21.116 14:06:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:24:21.116 14:06:11 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:21.116 14:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.116 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:24:21.116 request: 00:24:21.116 { 00:24:21.116 "name": "NVMe0", 00:24:21.116 "trtype": "tcp", 00:24:21.116 "traddr": "10.0.0.2", 00:24:21.116 "hostaddr": "10.0.0.2", 00:24:21.116 "hostsvcid": "60000", 00:24:21.116 "adrfam": "ipv4", 00:24:21.116 "trsvcid": "4420", 00:24:21.116 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:21.116 "multipath": "failover", 00:24:21.116 "method": "bdev_nvme_attach_controller", 00:24:21.116 "req_id": 1 00:24:21.116 } 00:24:21.116 Got JSON-RPC error response 00:24:21.116 response: 00:24:21.116 { 00:24:21.116 "code": -114, 00:24:21.116 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:21.116 } 00:24:21.116 14:06:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:24:21.116 14:06:11 -- common/autotest_common.sh@643 -- # es=1 00:24:21.116 14:06:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:24:21.116 14:06:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:24:21.116 14:06:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:24:21.116 14:06:11 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.116 14:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.116 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:24:21.375 00:24:21.375 14:06:12 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:24:21.375 14:06:12 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.375 14:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.375 14:06:12 -- common/autotest_common.sh@10 -- # set +x 00:24:21.375 14:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.375 14:06:12 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:21.375 14:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.375 14:06:12 -- common/autotest_common.sh@10 -- # set +x 00:24:21.375 00:24:21.375 14:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.375 14:06:12 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:21.375 14:06:12 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:21.375 14:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.375 14:06:12 -- common/autotest_common.sh@10 -- # set +x 00:24:21.375 14:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.375 14:06:12 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:21.375 14:06:12 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:22.753 0 00:24:22.753 14:06:13 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:22.753 14:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.753 14:06:13 -- common/autotest_common.sh@10 -- # set +x 00:24:22.753 14:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.753 14:06:13 -- host/multicontroller.sh@100 -- # killprocess 3358742 00:24:22.753 14:06:13 -- common/autotest_common.sh@926 -- # '[' -z 3358742 ']' 00:24:22.753 14:06:13 -- common/autotest_common.sh@930 -- # kill -0 3358742 00:24:22.753 14:06:13 -- common/autotest_common.sh@931 -- # uname 00:24:22.753 14:06:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:22.753 14:06:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3358742 00:24:22.753 14:06:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:22.753 14:06:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:22.753 14:06:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3358742' 00:24:22.753 killing process with pid 3358742 00:24:22.753 14:06:13 -- common/autotest_common.sh@945 -- # kill 3358742 00:24:22.753 14:06:13 -- common/autotest_common.sh@950 -- # wait 3358742 00:24:22.753 14:06:13 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:22.753 14:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.753 14:06:13 -- common/autotest_common.sh@10 -- # set +x 00:24:22.753 14:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.753 14:06:13 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:22.753 14:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:22.753 14:06:13 -- common/autotest_common.sh@10 -- # set +x 00:24:22.753 14:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:22.753 14:06:13 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
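Note: bdevperf queued the '-q 128 -o 4096 -w write -t 1' workload at launch and only starts I/O when perform_tests is sent. A sketch of the multicontroller.sh@95 and @98 steps traced above:

    # kick off the queued write workload; the helper returns when the 1-second run
    # completes, and bdevperf prints the per-device latency table captured in try.txt
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    # the second path is then dropped again before teardown
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1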
00:24:22.753 14:06:13 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
14:06:13 -- common/autotest_common.sh@1597 -- # read -r file
14:06:13 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f
14:06:13 -- common/autotest_common.sh@1596 -- # sort -u
14:06:13 -- common/autotest_common.sh@1598 -- # cat
00:24:22.753 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:22.753 [2024-07-23 14:06:10.804587] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:24:22.753 [2024-07-23 14:06:10.804633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3358742 ]
00:24:22.753 EAL: No free 2048 kB hugepages reported on node 1
00:24:22.753 [2024-07-23 14:06:10.858732] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:22.753 [2024-07-23 14:06:10.937751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:22.753 [2024-07-23 14:06:12.300974] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 6ebf517d-ed4c-492f-b9af-d920de36df3f already exists
00:24:22.753 [2024-07-23 14:06:12.301003] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:6ebf517d-ed4c-492f-b9af-d920de36df3f alias for bdev NVMe1n1
00:24:22.753 [2024-07-23 14:06:12.301012] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:24:22.753 Running I/O for 1 seconds...
00:24:22.753
00:24:22.753                                                                 Latency(us)
00:24:22.753 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:22.753 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:24:22.753      NVMe0n1                :       1.01   23501.41      91.80      0.00      0.00    5429.63    4046.14   25302.59
00:24:22.753 ===================================================================================================================
00:24:22.753 Total                       :              23501.41      91.80      0.00      0.00    5429.63    4046.14   25302.59
00:24:22.753 Received shutdown signal, test time was about 1.000000 seconds
00:24:22.753
00:24:22.753                                                                 Latency(us)
00:24:22.753 Device Information          : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:22.753 ===================================================================================================================
00:24:22.753 Total                       :                  0.00       0.00      0.00      0.00       0.00       0.00       0.00
00:24:22.753 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
14:06:13 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
14:06:13 -- common/autotest_common.sh@1597 -- # read -r file
14:06:13 -- host/multicontroller.sh@108 -- # nvmftestfini
14:06:13 -- nvmf/common.sh@476 -- # nvmfcleanup
14:06:13 -- nvmf/common.sh@116 -- # sync
14:06:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
14:06:13 -- nvmf/common.sh@119 -- # set +e
14:06:13 -- nvmf/common.sh@120 -- # for i in {1..20}
14:06:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
14:06:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
14:06:13 -- nvmf/common.sh@123 -- # set
-e 00:24:23.012 14:06:13 -- nvmf/common.sh@124 -- # return 0 00:24:23.012 14:06:13 -- nvmf/common.sh@477 -- # '[' -n 3358449 ']' 00:24:23.012 14:06:13 -- nvmf/common.sh@478 -- # killprocess 3358449 00:24:23.012 14:06:13 -- common/autotest_common.sh@926 -- # '[' -z 3358449 ']' 00:24:23.012 14:06:13 -- common/autotest_common.sh@930 -- # kill -0 3358449 00:24:23.012 14:06:13 -- common/autotest_common.sh@931 -- # uname 00:24:23.012 14:06:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:23.012 14:06:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3358449 00:24:23.012 14:06:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:24:23.012 14:06:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:24:23.012 14:06:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3358449' 00:24:23.012 killing process with pid 3358449 00:24:23.012 14:06:13 -- common/autotest_common.sh@945 -- # kill 3358449 00:24:23.012 14:06:13 -- common/autotest_common.sh@950 -- # wait 3358449 00:24:23.271 14:06:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:23.271 14:06:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:23.271 14:06:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:23.271 14:06:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.271 14:06:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:23.271 14:06:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.271 14:06:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.271 14:06:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.178 14:06:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:25.178 00:24:25.178 real 0m12.209s 00:24:25.178 user 0m17.159s 00:24:25.178 sys 0m5.056s 00:24:25.178 14:06:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:25.178 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:24:25.178 ************************************ 00:24:25.178 END TEST nvmf_multicontroller 00:24:25.178 ************************************ 00:24:25.178 14:06:16 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:25.178 14:06:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:25.178 14:06:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:25.178 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:24:25.178 ************************************ 00:24:25.178 START TEST nvmf_aer 00:24:25.178 ************************************ 00:24:25.178 14:06:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:25.437 * Looking for test storage... 
00:24:25.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.437 14:06:16 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.437 14:06:16 -- nvmf/common.sh@7 -- # uname -s 00:24:25.437 14:06:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.437 14:06:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.437 14:06:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.437 14:06:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.437 14:06:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.437 14:06:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.437 14:06:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.437 14:06:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.437 14:06:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.437 14:06:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.437 14:06:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.437 14:06:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.437 14:06:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.437 14:06:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.437 14:06:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.437 14:06:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.437 14:06:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.437 14:06:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.437 14:06:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.437 14:06:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.437 14:06:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.437 14:06:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.437 14:06:16 -- paths/export.sh@5 -- # export PATH 00:24:25.437 14:06:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.437 14:06:16 -- nvmf/common.sh@46 -- # : 0 00:24:25.437 14:06:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:25.437 14:06:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:25.437 14:06:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:25.437 14:06:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.437 14:06:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.437 14:06:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:25.438 14:06:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:25.438 14:06:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:25.438 14:06:16 -- host/aer.sh@11 -- # nvmftestinit 00:24:25.438 14:06:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:25.438 14:06:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.438 14:06:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:25.438 14:06:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:25.438 14:06:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:25.438 14:06:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.438 14:06:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.438 14:06:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.438 14:06:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:25.438 14:06:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:25.438 14:06:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:25.438 14:06:16 -- common/autotest_common.sh@10 -- # set +x 00:24:30.756 14:06:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:30.756 14:06:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:30.756 14:06:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:30.756 14:06:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:30.756 14:06:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:30.756 14:06:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:30.756 14:06:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:30.756 14:06:21 -- nvmf/common.sh@294 -- # net_devs=() 00:24:30.756 14:06:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:30.756 14:06:21 -- nvmf/common.sh@295 -- # e810=() 00:24:30.756 14:06:21 -- nvmf/common.sh@295 -- # local -ga e810 00:24:30.756 14:06:21 -- nvmf/common.sh@296 -- # x722=() 00:24:30.756 
14:06:21 -- nvmf/common.sh@296 -- # local -ga x722 00:24:30.756 14:06:21 -- nvmf/common.sh@297 -- # mlx=() 00:24:30.756 14:06:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:30.756 14:06:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.756 14:06:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:30.756 14:06:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:30.756 14:06:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:30.756 14:06:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:30.756 14:06:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:30.756 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:30.756 14:06:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:30.756 14:06:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:30.756 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:30.756 14:06:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:30.756 14:06:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:30.756 14:06:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:30.756 14:06:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.756 14:06:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:30.756 14:06:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.756 14:06:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:30.756 Found net devices under 0000:86:00.0: cvl_0_0 00:24:30.756 14:06:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.756 14:06:21 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:30.756 14:06:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.756 14:06:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:30.756 14:06:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.756 14:06:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:30.757 Found net devices under 0000:86:00.1: cvl_0_1 00:24:30.757 14:06:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.757 14:06:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:30.757 14:06:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:30.757 14:06:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:30.757 14:06:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:30.757 14:06:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:30.757 14:06:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.757 14:06:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.757 14:06:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.757 14:06:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:30.757 14:06:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.757 14:06:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.757 14:06:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:30.757 14:06:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.757 14:06:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.757 14:06:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:30.757 14:06:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:30.757 14:06:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.757 14:06:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.017 14:06:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.017 14:06:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.017 14:06:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:31.017 14:06:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.017 14:06:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.017 14:06:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.017 14:06:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:31.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:24:31.017 00:24:31.017 --- 10.0.0.2 ping statistics --- 00:24:31.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.017 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:24:31.017 14:06:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:31.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:24:31.017 00:24:31.017 --- 10.0.0.1 ping statistics --- 00:24:31.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.017 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:24:31.017 14:06:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.017 14:06:21 -- nvmf/common.sh@410 -- # return 0 00:24:31.017 14:06:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:31.017 14:06:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.017 14:06:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:31.017 14:06:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:31.017 14:06:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.017 14:06:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:31.017 14:06:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:31.017 14:06:21 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:31.017 14:06:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:31.017 14:06:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:31.017 14:06:21 -- common/autotest_common.sh@10 -- # set +x 00:24:31.017 14:06:21 -- nvmf/common.sh@469 -- # nvmfpid=3362766 00:24:31.017 14:06:21 -- nvmf/common.sh@470 -- # waitforlisten 3362766 00:24:31.017 14:06:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.017 14:06:21 -- common/autotest_common.sh@819 -- # '[' -z 3362766 ']' 00:24:31.017 14:06:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.017 14:06:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:31.017 14:06:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.017 14:06:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:31.017 14:06:21 -- common/autotest_common.sh@10 -- # set +x 00:24:31.017 [2024-07-23 14:06:21.982683] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:31.017 [2024-07-23 14:06:21.982721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.017 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.278 [2024-07-23 14:06:22.040148] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.278 [2024-07-23 14:06:22.111239] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:31.278 [2024-07-23 14:06:22.111351] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.278 [2024-07-23 14:06:22.111359] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.278 [2024-07-23 14:06:22.111366] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
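Note: the two app_setup_trace notices above give both ways to look at the tracepoints enabled by '-e 0xFFFF'. A minimal sketch, assuming spdk_trace was built under build/bin in this checkout:

    # decode a live snapshot of the target's trace ring; the -s and -i values come
    # straight from the notice printed by the app
    ./build/bin/spdk_trace -s nvmf -i 0
    # or keep the raw shared-memory ring for offline analysis after the target exits
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0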
00:24:31.278 [2024-07-23 14:06:22.111466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.278 [2024-07-23 14:06:22.111483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.278 [2024-07-23 14:06:22.111571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.278 [2024-07-23 14:06:22.111571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.848 14:06:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:31.848 14:06:22 -- common/autotest_common.sh@852 -- # return 0 00:24:31.848 14:06:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:31.848 14:06:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:31.848 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:24:31.848 14:06:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.848 14:06:22 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:31.848 14:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.848 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:24:31.848 [2024-07-23 14:06:22.814368] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.848 14:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.848 14:06:22 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:31.848 14:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.848 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:24:31.848 Malloc0 00:24:31.848 14:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.848 14:06:22 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:31.848 14:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.848 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:24:31.848 14:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.848 14:06:22 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:31.848 14:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.848 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:24:31.848 14:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:31.848 14:06:22 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.848 14:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:31.848 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:24:32.108 [2024-07-23 14:06:22.866324] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:32.108 14:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.108 14:06:22 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:32.108 14:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.108 14:06:22 -- common/autotest_common.sh@10 -- # set +x 00:24:32.108 [2024-07-23 14:06:22.874119] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:32.108 [ 00:24:32.108 { 00:24:32.108 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:32.108 "subtype": "Discovery", 00:24:32.108 "listen_addresses": [], 00:24:32.108 "allow_any_host": true, 00:24:32.108 "hosts": [] 00:24:32.108 }, 00:24:32.108 { 00:24:32.108 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:24:32.108 "subtype": "NVMe", 00:24:32.108 "listen_addresses": [ 00:24:32.108 { 00:24:32.108 "transport": "TCP", 00:24:32.108 "trtype": "TCP", 00:24:32.108 "adrfam": "IPv4", 00:24:32.108 "traddr": "10.0.0.2", 00:24:32.108 "trsvcid": "4420" 00:24:32.108 } 00:24:32.108 ], 00:24:32.109 "allow_any_host": true, 00:24:32.109 "hosts": [], 00:24:32.109 "serial_number": "SPDK00000000000001", 00:24:32.109 "model_number": "SPDK bdev Controller", 00:24:32.109 "max_namespaces": 2, 00:24:32.109 "min_cntlid": 1, 00:24:32.109 "max_cntlid": 65519, 00:24:32.109 "namespaces": [ 00:24:32.109 { 00:24:32.109 "nsid": 1, 00:24:32.109 "bdev_name": "Malloc0", 00:24:32.109 "name": "Malloc0", 00:24:32.109 "nguid": "56A18F22A78A44709FA33A2CAEA740A1", 00:24:32.109 "uuid": "56a18f22-a78a-4470-9fa3-3a2caea740a1" 00:24:32.109 } 00:24:32.109 ] 00:24:32.109 } 00:24:32.109 ] 00:24:32.109 14:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.109 14:06:22 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:32.109 14:06:22 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:32.109 14:06:22 -- host/aer.sh@33 -- # aerpid=3362802 00:24:32.109 14:06:22 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:32.109 14:06:22 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:32.109 14:06:22 -- common/autotest_common.sh@1244 -- # local i=0 00:24:32.109 14:06:22 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.109 14:06:22 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:24:32.109 14:06:22 -- common/autotest_common.sh@1247 -- # i=1 00:24:32.109 14:06:22 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:32.109 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.109 14:06:22 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.109 14:06:22 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:24:32.109 14:06:22 -- common/autotest_common.sh@1247 -- # i=2 00:24:32.109 14:06:22 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:24:32.109 14:06:23 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:32.109 14:06:23 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:32.109 14:06:23 -- common/autotest_common.sh@1255 -- # return 0 00:24:32.109 14:06:23 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:32.109 14:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.109 14:06:23 -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 Malloc1 00:24:32.369 14:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.369 14:06:23 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:32.369 14:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.369 14:06:23 -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 14:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.369 14:06:23 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:32.369 14:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.369 14:06:23 -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 [ 00:24:32.369 { 00:24:32.369 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:32.369 "subtype": "Discovery", 00:24:32.369 "listen_addresses": [], 00:24:32.369 "allow_any_host": true, 00:24:32.369 "hosts": [] 00:24:32.369 }, 00:24:32.369 { 00:24:32.369 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:32.369 "subtype": "NVMe", 00:24:32.369 "listen_addresses": [ 00:24:32.369 { 00:24:32.369 "transport": "TCP", 00:24:32.369 "trtype": "TCP", 00:24:32.369 "adrfam": "IPv4", 00:24:32.369 "traddr": "10.0.0.2", 00:24:32.369 "trsvcid": "4420" 00:24:32.369 } 00:24:32.369 ], 00:24:32.369 "allow_any_host": true, 00:24:32.369 "hosts": [], 00:24:32.369 "serial_number": "SPDK00000000000001", 00:24:32.369 "model_number": "SPDK bdev Controller", 00:24:32.369 "max_namespaces": 2, 00:24:32.369 "min_cntlid": 1, 00:24:32.369 "max_cntlid": 65519, 00:24:32.369 "namespaces": [ 00:24:32.369 { 00:24:32.369 "nsid": 1, 00:24:32.369 "bdev_name": "Malloc0", 00:24:32.369 "name": "Malloc0", 00:24:32.369 "nguid": "56A18F22A78A44709FA33A2CAEA740A1", 00:24:32.369 "uuid": "56a18f22-a78a-4470-9fa3-3a2caea740a1" 00:24:32.369 }, 00:24:32.369 { 00:24:32.369 "nsid": 2, 00:24:32.369 "bdev_name": "Malloc1", 00:24:32.369 Asynchronous Event Request test 00:24:32.369 Attaching to 10.0.0.2 00:24:32.369 Attached to 10.0.0.2 00:24:32.369 Registering asynchronous event callbacks... 00:24:32.369 Starting namespace attribute notice tests for all controllers... 00:24:32.369 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:32.369 aer_cb - Changed Namespace 00:24:32.369 Cleaning up... 
00:24:32.369 "name": "Malloc1", 00:24:32.369 "nguid": "3D96873646F44235A2EF06E4C0278C45", 00:24:32.369 "uuid": "3d968736-46f4-4235-a2ef-06e4c0278c45" 00:24:32.369 } 00:24:32.369 ] 00:24:32.369 } 00:24:32.369 ] 00:24:32.369 14:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.369 14:06:23 -- host/aer.sh@43 -- # wait 3362802 00:24:32.369 14:06:23 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:32.369 14:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.369 14:06:23 -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 14:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.369 14:06:23 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:32.369 14:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.369 14:06:23 -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 14:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.369 14:06:23 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:32.369 14:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:32.369 14:06:23 -- common/autotest_common.sh@10 -- # set +x 00:24:32.369 14:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:32.369 14:06:23 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:32.369 14:06:23 -- host/aer.sh@51 -- # nvmftestfini 00:24:32.369 14:06:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:32.369 14:06:23 -- nvmf/common.sh@116 -- # sync 00:24:32.369 14:06:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:32.369 14:06:23 -- nvmf/common.sh@119 -- # set +e 00:24:32.369 14:06:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:32.369 14:06:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:32.369 rmmod nvme_tcp 00:24:32.369 rmmod nvme_fabrics 00:24:32.369 rmmod nvme_keyring 00:24:32.369 14:06:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:32.369 14:06:23 -- nvmf/common.sh@123 -- # set -e 00:24:32.369 14:06:23 -- nvmf/common.sh@124 -- # return 0 00:24:32.369 14:06:23 -- nvmf/common.sh@477 -- # '[' -n 3362766 ']' 00:24:32.369 14:06:23 -- nvmf/common.sh@478 -- # killprocess 3362766 00:24:32.369 14:06:23 -- common/autotest_common.sh@926 -- # '[' -z 3362766 ']' 00:24:32.369 14:06:23 -- common/autotest_common.sh@930 -- # kill -0 3362766 00:24:32.369 14:06:23 -- common/autotest_common.sh@931 -- # uname 00:24:32.369 14:06:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:32.369 14:06:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3362766 00:24:32.369 14:06:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:32.369 14:06:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:32.369 14:06:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3362766' 00:24:32.369 killing process with pid 3362766 00:24:32.369 14:06:23 -- common/autotest_common.sh@945 -- # kill 3362766 00:24:32.369 [2024-07-23 14:06:23.328699] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:32.369 14:06:23 -- common/autotest_common.sh@950 -- # wait 3362766 00:24:32.629 14:06:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:32.629 14:06:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:32.629 14:06:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:32.629 14:06:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
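Note: the killprocess sequence traced above (kill -0, ps name check, kill, wait) is the standard autotest teardown; reduced to its essentials with this run's pid:

    pid=3362766                              # nvmfpid from this run
    kill -0 "$pid"                           # fail fast if it already exited
    name=$(ps --no-headers -o comm= "$pid")  # expect reactor_0, never a bare sudo
    [ "$name" != sudo ] && kill "$pid"
    wait "$pid"                              # reap the child so nvmftestfini can proceed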
00:24:32.629 14:06:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:32.629 14:06:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.629 14:06:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:32.629 14:06:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.169 14:06:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:35.169 00:24:35.169 real 0m9.406s 00:24:35.169 user 0m7.212s 00:24:35.169 sys 0m4.635s 00:24:35.169 14:06:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:35.169 14:06:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.169 ************************************ 00:24:35.169 END TEST nvmf_aer 00:24:35.169 ************************************ 00:24:35.169 14:06:25 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:35.169 14:06:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:35.169 14:06:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:35.169 14:06:25 -- common/autotest_common.sh@10 -- # set +x 00:24:35.169 ************************************ 00:24:35.169 START TEST nvmf_async_init 00:24:35.169 ************************************ 00:24:35.169 14:06:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:35.169 * Looking for test storage... 00:24:35.169 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:35.169 14:06:25 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:35.169 14:06:25 -- nvmf/common.sh@7 -- # uname -s 00:24:35.169 14:06:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.169 14:06:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.169 14:06:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.169 14:06:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.169 14:06:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.169 14:06:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.169 14:06:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.169 14:06:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.169 14:06:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.169 14:06:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.169 14:06:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:35.169 14:06:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:35.169 14:06:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.169 14:06:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.170 14:06:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:35.170 14:06:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:35.170 14:06:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.170 14:06:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.170 14:06:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.170 14:06:25 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.170 14:06:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.170 14:06:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.170 14:06:25 -- paths/export.sh@5 -- # export PATH 00:24:35.170 14:06:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.170 14:06:25 -- nvmf/common.sh@46 -- # : 0 00:24:35.170 14:06:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:35.170 14:06:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:35.170 14:06:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:35.170 14:06:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.170 14:06:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.170 14:06:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:35.170 14:06:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:35.170 14:06:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:35.170 14:06:25 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:35.170 14:06:25 -- host/async_init.sh@14 -- # null_block_size=512 00:24:35.170 14:06:25 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:35.170 14:06:25 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:35.170 14:06:25 -- host/async_init.sh@20 -- # uuidgen 00:24:35.170 14:06:25 -- host/async_init.sh@20 -- # tr -d - 00:24:35.170 14:06:25 -- host/async_init.sh@20 -- # nguid=b047bd09b83e4bc2a51707176e58682e 00:24:35.170 14:06:25 -- host/async_init.sh@22 -- # nvmftestinit 00:24:35.170 14:06:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 
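For reference, the host/async_init.sh@13-@20 setup just traced is only a handful of assignments; the nguid handed to the null namespace is simply a random UUID with its hyphens stripped:

  null_bdev_size=1024            # size passed to bdev_null_create (megabytes)
  null_block_size=512            # logical block size in bytes
  null_bdev=null0
  nvme_bdev=nvme0
  nguid=$(uuidgen | tr -d -)     # e.g. b047bd09b83e4bc2a51707176e58682e

The same value resurfaces later as the namespace uuid b047bd09-b83e-4bc2-a517-07176e58682e once SPDK re-inserts the hyphens into canonical form.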
00:24:35.170 14:06:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.170 14:06:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:35.170 14:06:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:35.170 14:06:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:35.170 14:06:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.170 14:06:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:35.170 14:06:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.170 14:06:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:35.170 14:06:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:35.170 14:06:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:35.170 14:06:25 -- common/autotest_common.sh@10 -- # set +x 00:24:40.454 14:06:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:40.454 14:06:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:40.454 14:06:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:40.454 14:06:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:40.454 14:06:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:40.454 14:06:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:40.454 14:06:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:40.454 14:06:30 -- nvmf/common.sh@294 -- # net_devs=() 00:24:40.454 14:06:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:40.454 14:06:30 -- nvmf/common.sh@295 -- # e810=() 00:24:40.454 14:06:30 -- nvmf/common.sh@295 -- # local -ga e810 00:24:40.454 14:06:30 -- nvmf/common.sh@296 -- # x722=() 00:24:40.454 14:06:30 -- nvmf/common.sh@296 -- # local -ga x722 00:24:40.454 14:06:30 -- nvmf/common.sh@297 -- # mlx=() 00:24:40.454 14:06:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:40.454 14:06:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.454 14:06:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:40.454 14:06:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:40.454 14:06:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:40.454 14:06:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.454 14:06:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:40.454 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:40.454 14:06:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.454 14:06:30 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.454 14:06:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:40.454 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:40.454 14:06:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:40.454 14:06:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:40.454 14:06:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.454 14:06:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.454 14:06:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.454 14:06:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.454 14:06:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:40.454 Found net devices under 0000:86:00.0: cvl_0_0 00:24:40.454 14:06:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.454 14:06:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.454 14:06:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.454 14:06:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.454 14:06:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.454 14:06:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:40.454 Found net devices under 0000:86:00.1: cvl_0_1 00:24:40.455 14:06:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.455 14:06:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:40.455 14:06:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:40.455 14:06:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:40.455 14:06:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:40.455 14:06:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:40.455 14:06:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.455 14:06:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.455 14:06:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.455 14:06:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:40.455 14:06:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.455 14:06:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.455 14:06:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:40.455 14:06:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.455 14:06:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.455 14:06:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:40.455 14:06:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:40.455 14:06:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.455 14:06:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
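Spelled out once, the nvmf_tcp_init plumbing traced here (the remaining addressing and firewall steps follow immediately below): the target port cvl_0_0 moves into a private network namespace and takes 10.0.0.2, while its peer cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed, with every command drawn from the common.sh@243 through @266 trace:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target side, isolated
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator-to-target sanity check

Everything the target binds (10.0.0.2:4420) therefore lives inside cvl_0_0_ns_spdk, which is why nvmf_tgt is launched under ip netns exec a few lines further down.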
00:24:40.455 14:06:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.455 14:06:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.455 14:06:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:40.455 14:06:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.455 14:06:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.455 14:06:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.455 14:06:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:40.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:24:40.455 00:24:40.455 --- 10.0.0.2 ping statistics --- 00:24:40.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.455 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:24:40.455 14:06:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:40.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:24:40.455 00:24:40.455 --- 10.0.0.1 ping statistics --- 00:24:40.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.455 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:24:40.455 14:06:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.455 14:06:31 -- nvmf/common.sh@410 -- # return 0 00:24:40.455 14:06:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:40.455 14:06:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.455 14:06:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:40.455 14:06:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:40.455 14:06:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.455 14:06:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:40.455 14:06:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:40.455 14:06:31 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:40.455 14:06:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:40.455 14:06:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:40.455 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:24:40.455 14:06:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:40.455 14:06:31 -- nvmf/common.sh@469 -- # nvmfpid=3366336 00:24:40.455 14:06:31 -- nvmf/common.sh@470 -- # waitforlisten 3366336 00:24:40.455 14:06:31 -- common/autotest_common.sh@819 -- # '[' -z 3366336 ']' 00:24:40.455 14:06:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.455 14:06:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:40.455 14:06:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.455 14:06:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:40.455 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:24:40.455 [2024-07-23 14:06:31.138644] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:24:40.455 [2024-07-23 14:06:31.138683] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.455 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.455 [2024-07-23 14:06:31.194374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.455 [2024-07-23 14:06:31.271479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:40.455 [2024-07-23 14:06:31.271587] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.455 [2024-07-23 14:06:31.271595] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.455 [2024-07-23 14:06:31.271601] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.455 [2024-07-23 14:06:31.271617] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.025 14:06:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:41.025 14:06:31 -- common/autotest_common.sh@852 -- # return 0 00:24:41.025 14:06:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:41.025 14:06:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:41.025 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:24:41.025 14:06:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:41.025 14:06:31 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:41.025 14:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.025 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:24:41.025 [2024-07-23 14:06:31.989875] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.025 14:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.025 14:06:31 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:41.025 14:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.025 14:06:31 -- common/autotest_common.sh@10 -- # set +x 00:24:41.025 null0 00:24:41.025 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.025 14:06:32 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:41.025 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.025 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.025 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.025 14:06:32 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:41.025 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.025 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.025 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.025 14:06:32 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b047bd09b83e4bc2a51707176e58682e 00:24:41.025 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.025 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.025 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.025 14:06:32 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:41.025 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.025 14:06:32 -- 
common/autotest_common.sh@10 -- # set +x 00:24:41.025 [2024-07-23 14:06:32.030076] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:41.025 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.025 14:06:32 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:41.025 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.025 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.285 nvme0n1 00:24:41.285 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.285 14:06:32 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:41.285 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.285 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.285 [ 00:24:41.285 { 00:24:41.285 "name": "nvme0n1", 00:24:41.285 "aliases": [ 00:24:41.285 "b047bd09-b83e-4bc2-a517-07176e58682e" 00:24:41.285 ], 00:24:41.285 "product_name": "NVMe disk", 00:24:41.285 "block_size": 512, 00:24:41.285 "num_blocks": 2097152, 00:24:41.285 "uuid": "b047bd09-b83e-4bc2-a517-07176e58682e", 00:24:41.285 "assigned_rate_limits": { 00:24:41.285 "rw_ios_per_sec": 0, 00:24:41.285 "rw_mbytes_per_sec": 0, 00:24:41.285 "r_mbytes_per_sec": 0, 00:24:41.285 "w_mbytes_per_sec": 0 00:24:41.285 }, 00:24:41.285 "claimed": false, 00:24:41.285 "zoned": false, 00:24:41.285 "supported_io_types": { 00:24:41.285 "read": true, 00:24:41.285 "write": true, 00:24:41.285 "unmap": false, 00:24:41.285 "write_zeroes": true, 00:24:41.285 "flush": true, 00:24:41.285 "reset": true, 00:24:41.285 "compare": true, 00:24:41.285 "compare_and_write": true, 00:24:41.285 "abort": true, 00:24:41.285 "nvme_admin": true, 00:24:41.285 "nvme_io": true 00:24:41.285 }, 00:24:41.285 "driver_specific": { 00:24:41.285 "nvme": [ 00:24:41.285 { 00:24:41.285 "trid": { 00:24:41.285 "trtype": "TCP", 00:24:41.285 "adrfam": "IPv4", 00:24:41.285 "traddr": "10.0.0.2", 00:24:41.285 "trsvcid": "4420", 00:24:41.285 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:41.285 }, 00:24:41.285 "ctrlr_data": { 00:24:41.285 "cntlid": 1, 00:24:41.285 "vendor_id": "0x8086", 00:24:41.285 "model_number": "SPDK bdev Controller", 00:24:41.285 "serial_number": "00000000000000000000", 00:24:41.285 "firmware_revision": "24.01.1", 00:24:41.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.285 "oacs": { 00:24:41.285 "security": 0, 00:24:41.285 "format": 0, 00:24:41.285 "firmware": 0, 00:24:41.285 "ns_manage": 0 00:24:41.285 }, 00:24:41.285 "multi_ctrlr": true, 00:24:41.285 "ana_reporting": false 00:24:41.285 }, 00:24:41.285 "vs": { 00:24:41.285 "nvme_version": "1.3" 00:24:41.285 }, 00:24:41.285 "ns_data": { 00:24:41.285 "id": 1, 00:24:41.285 "can_share": true 00:24:41.285 } 00:24:41.285 } 00:24:41.285 ], 00:24:41.285 "mp_policy": "active_passive" 00:24:41.285 } 00:24:41.285 } 00:24:41.285 ] 00:24:41.285 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.285 14:06:32 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:41.285 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.285 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.285 [2024-07-23 14:06:32.282631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:41.285 [2024-07-23 14:06:32.282685] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21262b0 (9): Bad file 
descriptor 00:24:41.545 [2024-07-23 14:06:32.414116] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:41.545 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.545 14:06:32 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:41.545 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.545 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.545 [ 00:24:41.545 { 00:24:41.545 "name": "nvme0n1", 00:24:41.545 "aliases": [ 00:24:41.545 "b047bd09-b83e-4bc2-a517-07176e58682e" 00:24:41.545 ], 00:24:41.545 "product_name": "NVMe disk", 00:24:41.545 "block_size": 512, 00:24:41.545 "num_blocks": 2097152, 00:24:41.545 "uuid": "b047bd09-b83e-4bc2-a517-07176e58682e", 00:24:41.545 "assigned_rate_limits": { 00:24:41.545 "rw_ios_per_sec": 0, 00:24:41.545 "rw_mbytes_per_sec": 0, 00:24:41.545 "r_mbytes_per_sec": 0, 00:24:41.545 "w_mbytes_per_sec": 0 00:24:41.545 }, 00:24:41.545 "claimed": false, 00:24:41.545 "zoned": false, 00:24:41.545 "supported_io_types": { 00:24:41.545 "read": true, 00:24:41.545 "write": true, 00:24:41.545 "unmap": false, 00:24:41.545 "write_zeroes": true, 00:24:41.545 "flush": true, 00:24:41.545 "reset": true, 00:24:41.545 "compare": true, 00:24:41.545 "compare_and_write": true, 00:24:41.545 "abort": true, 00:24:41.545 "nvme_admin": true, 00:24:41.545 "nvme_io": true 00:24:41.545 }, 00:24:41.545 "driver_specific": { 00:24:41.545 "nvme": [ 00:24:41.545 { 00:24:41.545 "trid": { 00:24:41.545 "trtype": "TCP", 00:24:41.545 "adrfam": "IPv4", 00:24:41.545 "traddr": "10.0.0.2", 00:24:41.545 "trsvcid": "4420", 00:24:41.545 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:41.545 }, 00:24:41.545 "ctrlr_data": { 00:24:41.545 "cntlid": 2, 00:24:41.545 "vendor_id": "0x8086", 00:24:41.545 "model_number": "SPDK bdev Controller", 00:24:41.545 "serial_number": "00000000000000000000", 00:24:41.545 "firmware_revision": "24.01.1", 00:24:41.545 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.545 "oacs": { 00:24:41.545 "security": 0, 00:24:41.545 "format": 0, 00:24:41.545 "firmware": 0, 00:24:41.545 "ns_manage": 0 00:24:41.545 }, 00:24:41.545 "multi_ctrlr": true, 00:24:41.545 "ana_reporting": false 00:24:41.545 }, 00:24:41.545 "vs": { 00:24:41.545 "nvme_version": "1.3" 00:24:41.545 }, 00:24:41.545 "ns_data": { 00:24:41.545 "id": 1, 00:24:41.545 "can_share": true 00:24:41.545 } 00:24:41.545 } 00:24:41.545 ], 00:24:41.545 "mp_policy": "active_passive" 00:24:41.545 } 00:24:41.545 } 00:24:41.545 ] 00:24:41.545 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.545 14:06:32 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.545 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.545 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.545 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.545 14:06:32 -- host/async_init.sh@53 -- # mktemp 00:24:41.545 14:06:32 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.pArpkDxkhL 00:24:41.545 14:06:32 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:41.545 14:06:32 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.pArpkDxkhL 00:24:41.545 14:06:32 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:41.545 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.545 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.545 14:06:32 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.545 14:06:32 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:41.545 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.545 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.545 [2024-07-23 14:06:32.471201] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.545 [2024-07-23 14:06:32.471294] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:41.545 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.545 14:06:32 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pArpkDxkhL 00:24:41.545 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.545 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.545 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.545 14:06:32 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pArpkDxkhL 00:24:41.545 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.545 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.545 [2024-07-23 14:06:32.487241] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:41.545 nvme0n1 00:24:41.545 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.545 14:06:32 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:41.545 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.545 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.805 [ 00:24:41.805 { 00:24:41.805 "name": "nvme0n1", 00:24:41.805 "aliases": [ 00:24:41.805 "b047bd09-b83e-4bc2-a517-07176e58682e" 00:24:41.805 ], 00:24:41.805 "product_name": "NVMe disk", 00:24:41.805 "block_size": 512, 00:24:41.805 "num_blocks": 2097152, 00:24:41.805 "uuid": "b047bd09-b83e-4bc2-a517-07176e58682e", 00:24:41.805 "assigned_rate_limits": { 00:24:41.805 "rw_ios_per_sec": 0, 00:24:41.805 "rw_mbytes_per_sec": 0, 00:24:41.805 "r_mbytes_per_sec": 0, 00:24:41.805 "w_mbytes_per_sec": 0 00:24:41.805 }, 00:24:41.805 "claimed": false, 00:24:41.805 "zoned": false, 00:24:41.805 "supported_io_types": { 00:24:41.805 "read": true, 00:24:41.805 "write": true, 00:24:41.805 "unmap": false, 00:24:41.805 "write_zeroes": true, 00:24:41.805 "flush": true, 00:24:41.805 "reset": true, 00:24:41.805 "compare": true, 00:24:41.805 "compare_and_write": true, 00:24:41.805 "abort": true, 00:24:41.805 "nvme_admin": true, 00:24:41.805 "nvme_io": true 00:24:41.805 }, 00:24:41.805 "driver_specific": { 00:24:41.805 "nvme": [ 00:24:41.805 { 00:24:41.805 "trid": { 00:24:41.805 "trtype": "TCP", 00:24:41.805 "adrfam": "IPv4", 00:24:41.805 "traddr": "10.0.0.2", 00:24:41.805 "trsvcid": "4421", 00:24:41.805 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:41.805 }, 00:24:41.805 "ctrlr_data": { 00:24:41.805 "cntlid": 3, 00:24:41.805 "vendor_id": "0x8086", 00:24:41.805 "model_number": "SPDK bdev Controller", 00:24:41.805 "serial_number": "00000000000000000000", 00:24:41.805 "firmware_revision": "24.01.1", 00:24:41.805 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:41.805 "oacs": { 00:24:41.805 "security": 0, 00:24:41.805 "format": 0, 00:24:41.805 "firmware": 0, 00:24:41.805 
"ns_manage": 0 00:24:41.805 }, 00:24:41.805 "multi_ctrlr": true, 00:24:41.805 "ana_reporting": false 00:24:41.805 }, 00:24:41.805 "vs": { 00:24:41.805 "nvme_version": "1.3" 00:24:41.805 }, 00:24:41.805 "ns_data": { 00:24:41.806 "id": 1, 00:24:41.806 "can_share": true 00:24:41.806 } 00:24:41.806 } 00:24:41.806 ], 00:24:41.806 "mp_policy": "active_passive" 00:24:41.806 } 00:24:41.806 } 00:24:41.806 ] 00:24:41.806 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.806 14:06:32 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.806 14:06:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:41.806 14:06:32 -- common/autotest_common.sh@10 -- # set +x 00:24:41.806 14:06:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:41.806 14:06:32 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.pArpkDxkhL 00:24:41.806 14:06:32 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:41.806 14:06:32 -- host/async_init.sh@78 -- # nvmftestfini 00:24:41.806 14:06:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:41.806 14:06:32 -- nvmf/common.sh@116 -- # sync 00:24:41.806 14:06:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:41.806 14:06:32 -- nvmf/common.sh@119 -- # set +e 00:24:41.806 14:06:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:41.806 14:06:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:41.806 rmmod nvme_tcp 00:24:41.806 rmmod nvme_fabrics 00:24:41.806 rmmod nvme_keyring 00:24:41.806 14:06:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:41.806 14:06:32 -- nvmf/common.sh@123 -- # set -e 00:24:41.806 14:06:32 -- nvmf/common.sh@124 -- # return 0 00:24:41.806 14:06:32 -- nvmf/common.sh@477 -- # '[' -n 3366336 ']' 00:24:41.806 14:06:32 -- nvmf/common.sh@478 -- # killprocess 3366336 00:24:41.806 14:06:32 -- common/autotest_common.sh@926 -- # '[' -z 3366336 ']' 00:24:41.806 14:06:32 -- common/autotest_common.sh@930 -- # kill -0 3366336 00:24:41.806 14:06:32 -- common/autotest_common.sh@931 -- # uname 00:24:41.806 14:06:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:41.806 14:06:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3366336 00:24:41.806 14:06:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:41.806 14:06:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:41.806 14:06:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3366336' 00:24:41.806 killing process with pid 3366336 00:24:41.806 14:06:32 -- common/autotest_common.sh@945 -- # kill 3366336 00:24:41.806 14:06:32 -- common/autotest_common.sh@950 -- # wait 3366336 00:24:42.065 14:06:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:42.065 14:06:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:42.065 14:06:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:42.065 14:06:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.065 14:06:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:42.065 14:06:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.065 14:06:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.065 14:06:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.973 14:06:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:43.973 00:24:43.973 real 0m9.310s 00:24:43.973 user 0m3.524s 00:24:43.973 sys 0m4.318s 00:24:43.973 14:06:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.973 14:06:34 -- 
common/autotest_common.sh@10 -- # set +x 00:24:43.973 ************************************ 00:24:43.973 END TEST nvmf_async_init 00:24:43.973 ************************************ 00:24:43.973 14:06:34 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:43.973 14:06:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:43.973 14:06:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:43.973 14:06:34 -- common/autotest_common.sh@10 -- # set +x 00:24:43.973 ************************************ 00:24:43.973 START TEST dma 00:24:43.973 ************************************ 00:24:43.973 14:06:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:44.233 * Looking for test storage... 00:24:44.233 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.233 14:06:35 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.233 14:06:35 -- nvmf/common.sh@7 -- # uname -s 00:24:44.233 14:06:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.233 14:06:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.233 14:06:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.233 14:06:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.233 14:06:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.233 14:06:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.233 14:06:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.233 14:06:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.233 14:06:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.233 14:06:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.233 14:06:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:44.233 14:06:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:44.233 14:06:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.233 14:06:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.233 14:06:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.233 14:06:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.233 14:06:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.233 14:06:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.233 14:06:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.233 14:06:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.233 14:06:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.233 14:06:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.233 14:06:35 -- paths/export.sh@5 -- # export PATH 00:24:44.234 14:06:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.234 14:06:35 -- nvmf/common.sh@46 -- # : 0 00:24:44.234 14:06:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:44.234 14:06:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:44.234 14:06:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:44.234 14:06:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.234 14:06:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.234 14:06:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:44.234 14:06:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:44.234 14:06:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:44.234 14:06:35 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:44.234 14:06:35 -- host/dma.sh@13 -- # exit 0 00:24:44.234 00:24:44.234 real 0m0.105s 00:24:44.234 user 0m0.053s 00:24:44.234 sys 0m0.059s 00:24:44.234 14:06:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.234 14:06:35 -- common/autotest_common.sh@10 -- # set +x 00:24:44.234 ************************************ 00:24:44.234 END TEST dma 00:24:44.234 ************************************ 00:24:44.234 14:06:35 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:44.234 14:06:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:44.234 14:06:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:44.234 14:06:35 -- common/autotest_common.sh@10 -- # set +x 00:24:44.234 ************************************ 00:24:44.234 START TEST nvmf_identify 00:24:44.234 ************************************ 00:24:44.234 14:06:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:44.234 * Looking for 
test storage... 00:24:44.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:44.234 14:06:35 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.234 14:06:35 -- nvmf/common.sh@7 -- # uname -s 00:24:44.234 14:06:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.234 14:06:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.234 14:06:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.234 14:06:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.234 14:06:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.234 14:06:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.234 14:06:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.234 14:06:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.234 14:06:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.234 14:06:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.234 14:06:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:44.234 14:06:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:44.234 14:06:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.234 14:06:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.234 14:06:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.234 14:06:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.234 14:06:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.234 14:06:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.234 14:06:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.234 14:06:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.234 14:06:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.234 14:06:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.234 14:06:35 -- paths/export.sh@5 -- # export PATH 00:24:44.234 14:06:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.234 14:06:35 -- nvmf/common.sh@46 -- # : 0 00:24:44.234 14:06:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:44.234 14:06:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:44.234 14:06:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:44.234 14:06:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.234 14:06:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.234 14:06:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:44.234 14:06:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:44.234 14:06:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:44.234 14:06:35 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:44.234 14:06:35 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:44.234 14:06:35 -- host/identify.sh@14 -- # nvmftestinit 00:24:44.234 14:06:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:44.234 14:06:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.234 14:06:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:44.234 14:06:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:44.234 14:06:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:44.234 14:06:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.234 14:06:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.234 14:06:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.234 14:06:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:44.234 14:06:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:44.234 14:06:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:44.234 14:06:35 -- common/autotest_common.sh@10 -- # set +x 00:24:49.515 14:06:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:49.515 14:06:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:49.515 14:06:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:49.515 14:06:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:49.515 14:06:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:49.515 14:06:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:49.515 14:06:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:49.515 14:06:40 -- nvmf/common.sh@294 -- # net_devs=() 00:24:49.515 14:06:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:49.515 14:06:40 -- nvmf/common.sh@295 
-- # e810=() 00:24:49.515 14:06:40 -- nvmf/common.sh@295 -- # local -ga e810 00:24:49.515 14:06:40 -- nvmf/common.sh@296 -- # x722=() 00:24:49.515 14:06:40 -- nvmf/common.sh@296 -- # local -ga x722 00:24:49.515 14:06:40 -- nvmf/common.sh@297 -- # mlx=() 00:24:49.515 14:06:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:49.515 14:06:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.515 14:06:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:49.515 14:06:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:49.515 14:06:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:49.515 14:06:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:49.515 14:06:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:49.515 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:49.515 14:06:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:49.515 14:06:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:49.515 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:49.515 14:06:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:49.515 14:06:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:49.515 14:06:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.515 14:06:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:49.515 14:06:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.515 14:06:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:49.515 Found 
net devices under 0000:86:00.0: cvl_0_0 00:24:49.515 14:06:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.515 14:06:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:49.515 14:06:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.515 14:06:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:49.515 14:06:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.515 14:06:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:49.515 Found net devices under 0000:86:00.1: cvl_0_1 00:24:49.515 14:06:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.515 14:06:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:49.515 14:06:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:49.515 14:06:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:49.515 14:06:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.515 14:06:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.515 14:06:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.515 14:06:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:49.515 14:06:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.515 14:06:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.515 14:06:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:49.515 14:06:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.515 14:06:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.515 14:06:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:49.515 14:06:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:49.515 14:06:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.515 14:06:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.515 14:06:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.515 14:06:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.515 14:06:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:49.515 14:06:40 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.515 14:06:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.515 14:06:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.515 14:06:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:49.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:49.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:24:49.515 00:24:49.515 --- 10.0.0.2 ping statistics --- 00:24:49.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.515 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:24:49.515 14:06:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:49.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:24:49.515 00:24:49.515 --- 10.0.0.1 ping statistics --- 00:24:49.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.515 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:49.515 14:06:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.515 14:06:40 -- nvmf/common.sh@410 -- # return 0 00:24:49.515 14:06:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:49.515 14:06:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.515 14:06:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:49.515 14:06:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.516 14:06:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:49.516 14:06:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:49.516 14:06:40 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:49.516 14:06:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:49.516 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 14:06:40 -- host/identify.sh@19 -- # nvmfpid=3370134 00:24:49.516 14:06:40 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:49.516 14:06:40 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.516 14:06:40 -- host/identify.sh@23 -- # waitforlisten 3370134 00:24:49.516 14:06:40 -- common/autotest_common.sh@819 -- # '[' -z 3370134 ']' 00:24:49.516 14:06:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.516 14:06:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:49.516 14:06:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:49.516 14:06:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:49.516 14:06:40 -- common/autotest_common.sh@10 -- # set +x 00:24:49.516 [2024-07-23 14:06:40.351779] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:49.516 [2024-07-23 14:06:40.351824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.516 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.516 [2024-07-23 14:06:40.411153] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:49.516 [2024-07-23 14:06:40.484516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:49.516 [2024-07-23 14:06:40.484634] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.516 [2024-07-23 14:06:40.484641] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.516 [2024-07-23 14:06:40.484649] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
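Between nvmfappstart and the first rpc_cmd above, waitforlisten blocks until the freshly launched target answers on its RPC socket. A rough sketch of the idea only; the actual helper in autotest_common.sh (its $max_retries=100 and /var/tmp/spdk.sock default are visible in the trace) does more bookkeeping, so treat the body below as an assumption, not the real implementation:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      while ((max_retries--)); do
          kill -0 "$pid" 2>/dev/null || return 1    # give up if the target died
          [ -S "$rpc_addr" ] && return 0            # RPC socket exists, target is up
          sleep 0.1
      done
      return 1
  }

Only after this returns does the log proceed to nvmf_create_transport and the rest of the RPC-driven setup.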
00:24:49.516 [2024-07-23 14:06:40.484700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.516 [2024-07-23 14:06:40.484796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.516 [2024-07-23 14:06:40.484861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:49.516 [2024-07-23 14:06:40.484862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.456 14:06:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:50.456 14:06:41 -- common/autotest_common.sh@852 -- # return 0 00:24:50.456 14:06:41 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:50.456 14:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.456 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.456 [2024-07-23 14:06:41.158237] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.456 14:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.456 14:06:41 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:50.456 14:06:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:50.456 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.456 14:06:41 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:50.456 14:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.456 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.456 Malloc0 00:24:50.456 14:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.456 14:06:41 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:50.456 14:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.456 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.456 14:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.456 14:06:41 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:50.456 14:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.456 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.456 14:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.456 14:06:41 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.456 14:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.456 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.456 [2024-07-23 14:06:41.241973] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.456 14:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.456 14:06:41 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:50.456 14:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.456 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.456 14:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.456 14:06:41 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:50.456 14:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.456 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.456 [2024-07-23 14:06:41.257793] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:50.456 [ 
00:24:50.456 { 00:24:50.456 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:50.456 "subtype": "Discovery", 00:24:50.456 "listen_addresses": [ 00:24:50.456 { 00:24:50.456 "transport": "TCP", 00:24:50.456 "trtype": "TCP", 00:24:50.456 "adrfam": "IPv4", 00:24:50.456 "traddr": "10.0.0.2", 00:24:50.456 "trsvcid": "4420" 00:24:50.456 } 00:24:50.456 ], 00:24:50.456 "allow_any_host": true, 00:24:50.456 "hosts": [] 00:24:50.456 }, 00:24:50.456 { 00:24:50.456 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:50.456 "subtype": "NVMe", 00:24:50.456 "listen_addresses": [ 00:24:50.456 { 00:24:50.456 "transport": "TCP", 00:24:50.456 "trtype": "TCP", 00:24:50.456 "adrfam": "IPv4", 00:24:50.456 "traddr": "10.0.0.2", 00:24:50.456 "trsvcid": "4420" 00:24:50.456 } 00:24:50.456 ], 00:24:50.456 "allow_any_host": true, 00:24:50.456 "hosts": [], 00:24:50.456 "serial_number": "SPDK00000000000001", 00:24:50.456 "model_number": "SPDK bdev Controller", 00:24:50.456 "max_namespaces": 32, 00:24:50.456 "min_cntlid": 1, 00:24:50.456 "max_cntlid": 65519, 00:24:50.456 "namespaces": [ 00:24:50.456 { 00:24:50.456 "nsid": 1, 00:24:50.456 "bdev_name": "Malloc0", 00:24:50.456 "name": "Malloc0", 00:24:50.456 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:50.456 "eui64": "ABCDEF0123456789", 00:24:50.456 "uuid": "b5644b93-7d72-4d5f-bcac-d2cd5ab30106" 00:24:50.456 } 00:24:50.456 ] 00:24:50.456 } 00:24:50.456 ] 00:24:50.456 14:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.456 14:06:41 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:50.456 [2024-07-23 14:06:41.291896] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
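Read in sequence, the rpc_cmd calls above build the whole target configuration that the nvmf_get_subsystems JSON then reports: a TCP transport with an 8192-byte I/O unit size, a 64 MB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev as namespace 1, and data plus discovery listeners on 10.0.0.2:4420. For reference, the same setup as direct rpc.py invocations, a sketch with every argument copied from the trace (paths assume an SPDK checkout; rpc.py defaults to the /var/tmp/spdk.sock socket):

# Transport, backing bdev, subsystem, namespace, and listeners, exactly as rpc_cmd ran them.
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                           # prints the JSON shown above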
00:24:50.456 [2024-07-23 14:06:41.291942] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370193 ] 00:24:50.456 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.456 [2024-07-23 14:06:41.320572] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:50.456 [2024-07-23 14:06:41.320621] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:50.456 [2024-07-23 14:06:41.320627] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:50.456 [2024-07-23 14:06:41.320638] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:50.456 [2024-07-23 14:06:41.320646] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:50.456 [2024-07-23 14:06:41.321124] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:50.456 [2024-07-23 14:06:41.321160] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc049e0 0 00:24:50.456 [2024-07-23 14:06:41.336053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:50.456 [2024-07-23 14:06:41.336073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:50.456 [2024-07-23 14:06:41.336077] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:50.456 [2024-07-23 14:06:41.336081] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:50.456 [2024-07-23 14:06:41.336116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.336122] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.336126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.456 [2024-07-23 14:06:41.336139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:50.456 [2024-07-23 14:06:41.336155] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.456 [2024-07-23 14:06:41.344051] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.456 [2024-07-23 14:06:41.344060] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.456 [2024-07-23 14:06:41.344064] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.344067] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.456 [2024-07-23 14:06:41.344079] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:50.456 [2024-07-23 14:06:41.344085] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:50.456 [2024-07-23 14:06:41.344090] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:50.456 [2024-07-23 14:06:41.344101] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.344105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.344108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.456 [2024-07-23 14:06:41.344115] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.456 [2024-07-23 14:06:41.344128] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.456 [2024-07-23 14:06:41.344274] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.456 [2024-07-23 14:06:41.344287] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.456 [2024-07-23 14:06:41.344290] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.344294] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.456 [2024-07-23 14:06:41.344300] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:50.456 [2024-07-23 14:06:41.344309] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:50.456 [2024-07-23 14:06:41.344317] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.344320] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.344324] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.456 [2024-07-23 14:06:41.344332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.456 [2024-07-23 14:06:41.344345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.456 [2024-07-23 14:06:41.344473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.456 [2024-07-23 14:06:41.344482] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.456 [2024-07-23 14:06:41.344485] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.456 [2024-07-23 14:06:41.344489] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.456 [2024-07-23 14:06:41.344498] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:50.456 [2024-07-23 14:06:41.344506] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:50.456 [2024-07-23 14:06:41.344514] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.344517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.344520] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.344527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.457 [2024-07-23 14:06:41.344540] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.457 [2024-07-23 14:06:41.344672] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.457 [2024-07-23 14:06:41.344681] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.457 [2024-07-23 14:06:41.344684] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.344688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.457 [2024-07-23 14:06:41.344693] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:50.457 [2024-07-23 14:06:41.344704] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.344707] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.344711] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.344718] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.457 [2024-07-23 14:06:41.344730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.457 [2024-07-23 14:06:41.344861] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.457 [2024-07-23 14:06:41.344870] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.457 [2024-07-23 14:06:41.344873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.344876] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.457 [2024-07-23 14:06:41.344881] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:50.457 [2024-07-23 14:06:41.344886] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:50.457 [2024-07-23 14:06:41.344893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:50.457 [2024-07-23 14:06:41.344999] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:50.457 [2024-07-23 14:06:41.345003] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:50.457 [2024-07-23 14:06:41.345011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.345024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.457 [2024-07-23 14:06:41.345036] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.457 [2024-07-23 14:06:41.345168] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.457 [2024-07-23 14:06:41.345182] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.457 [2024-07-23 14:06:41.345185] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.457 
[2024-07-23 14:06:41.345188] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.457 [2024-07-23 14:06:41.345193] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:50.457 [2024-07-23 14:06:41.345203] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345207] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.345217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.457 [2024-07-23 14:06:41.345229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.457 [2024-07-23 14:06:41.345358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.457 [2024-07-23 14:06:41.345368] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.457 [2024-07-23 14:06:41.345371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.457 [2024-07-23 14:06:41.345379] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:50.457 [2024-07-23 14:06:41.345383] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:50.457 [2024-07-23 14:06:41.345392] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:50.457 [2024-07-23 14:06:41.345404] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:50.457 [2024-07-23 14:06:41.345413] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.345427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.457 [2024-07-23 14:06:41.345439] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.457 [2024-07-23 14:06:41.345600] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.457 [2024-07-23 14:06:41.345610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.457 [2024-07-23 14:06:41.345614] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345617] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc049e0): datao=0, datal=4096, cccid=0 00:24:50.457 [2024-07-23 14:06:41.345621] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6c730) on tqpair(0xc049e0): expected_datao=0, payload_size=4096 00:24:50.457 
[2024-07-23 14:06:41.345629] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345633] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345850] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.457 [2024-07-23 14:06:41.345856] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.457 [2024-07-23 14:06:41.345859] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345862] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.457 [2024-07-23 14:06:41.345873] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:50.457 [2024-07-23 14:06:41.345878] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:50.457 [2024-07-23 14:06:41.345882] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:50.457 [2024-07-23 14:06:41.345887] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:50.457 [2024-07-23 14:06:41.345891] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:50.457 [2024-07-23 14:06:41.345895] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:50.457 [2024-07-23 14:06:41.345906] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:50.457 [2024-07-23 14:06:41.345913] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.345920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.345927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:50.457 [2024-07-23 14:06:41.345939] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.457 [2024-07-23 14:06:41.346103] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.457 [2024-07-23 14:06:41.346113] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.457 [2024-07-23 14:06:41.346117] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346120] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6c730) on tqpair=0xc049e0 00:24:50.457 [2024-07-23 14:06:41.346128] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346131] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346134] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.346141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.457 [2024-07-23 14:06:41.346146] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346149] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346152] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.346157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.457 [2024-07-23 14:06:41.346163] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346166] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346169] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.346174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.457 [2024-07-23 14:06:41.346179] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346182] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.457 [2024-07-23 14:06:41.346185] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.457 [2024-07-23 14:06:41.346190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.457 [2024-07-23 14:06:41.346195] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:50.458 [2024-07-23 14:06:41.346209] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:50.458 [2024-07-23 14:06:41.346216] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.346219] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.346222] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc049e0) 00:24:50.458 [2024-07-23 14:06:41.346228] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.458 [2024-07-23 14:06:41.346242] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c730, cid 0, qid 0 00:24:50.458 [2024-07-23 14:06:41.346247] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c890, cid 1, qid 0 00:24:50.458 [2024-07-23 14:06:41.346251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6c9f0, cid 2, qid 0 00:24:50.458 [2024-07-23 14:06:41.346255] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.458 [2024-07-23 14:06:41.346259] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ccb0, cid 4, qid 0 00:24:50.458 [2024-07-23 14:06:41.346432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.458 [2024-07-23 14:06:41.346441] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.458 [2024-07-23 14:06:41.346445] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.346448] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ccb0) on 
tqpair=0xc049e0 00:24:50.458 [2024-07-23 14:06:41.346453] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:50.458 [2024-07-23 14:06:41.346458] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:50.458 [2024-07-23 14:06:41.346470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.346474] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.346477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc049e0) 00:24:50.458 [2024-07-23 14:06:41.346484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.458 [2024-07-23 14:06:41.346496] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ccb0, cid 4, qid 0 00:24:50.458 [2024-07-23 14:06:41.346637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.458 [2024-07-23 14:06:41.346647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.458 [2024-07-23 14:06:41.346650] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.346655] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc049e0): datao=0, datal=4096, cccid=4 00:24:50.458 [2024-07-23 14:06:41.346659] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6ccb0) on tqpair(0xc049e0): expected_datao=0, payload_size=4096 00:24:50.458 [2024-07-23 14:06:41.346853] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.346856] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387175] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.458 [2024-07-23 14:06:41.387191] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.458 [2024-07-23 14:06:41.387194] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387198] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ccb0) on tqpair=0xc049e0 00:24:50.458 [2024-07-23 14:06:41.387212] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:50.458 [2024-07-23 14:06:41.387238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387244] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc049e0) 00:24:50.458 [2024-07-23 14:06:41.387254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.458 [2024-07-23 14:06:41.387261] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387264] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387267] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc049e0) 00:24:50.458 [2024-07-23 14:06:41.387273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.458 [2024-07-23 14:06:41.387289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ccb0, cid 4, qid 0 00:24:50.458 [2024-07-23 14:06:41.387294] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ce10, cid 5, qid 0 00:24:50.458 [2024-07-23 14:06:41.387464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.458 [2024-07-23 14:06:41.387474] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.458 [2024-07-23 14:06:41.387477] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387480] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc049e0): datao=0, datal=1024, cccid=4 00:24:50.458 [2024-07-23 14:06:41.387484] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6ccb0) on tqpair(0xc049e0): expected_datao=0, payload_size=1024 00:24:50.458 [2024-07-23 14:06:41.387490] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387494] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387499] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.458 [2024-07-23 14:06:41.387504] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.458 [2024-07-23 14:06:41.387507] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.387511] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ce10) on tqpair=0xc049e0 00:24:50.458 [2024-07-23 14:06:41.432055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.458 [2024-07-23 14:06:41.432067] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.458 [2024-07-23 14:06:41.432070] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432074] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ccb0) on tqpair=0xc049e0 00:24:50.458 [2024-07-23 14:06:41.432087] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432090] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432094] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc049e0) 00:24:50.458 [2024-07-23 14:06:41.432101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.458 [2024-07-23 14:06:41.432117] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ccb0, cid 4, qid 0 00:24:50.458 [2024-07-23 14:06:41.432328] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.458 [2024-07-23 14:06:41.432338] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.458 [2024-07-23 14:06:41.432341] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432345] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc049e0): datao=0, datal=3072, cccid=4 00:24:50.458 [2024-07-23 14:06:41.432349] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6ccb0) on tqpair(0xc049e0): expected_datao=0, payload_size=3072 00:24:50.458 [2024-07-23 14:06:41.432355] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
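The three GET LOG PAGE (opcode 02h) admin commands in this stretch of the trace (cdw10:00ff0070 above, then cdw10:02ff0070, then cdw10:00010070 just below) are the usual discovery-log read sequence. In CDW10, the low byte carries the log page identifier (0x70, the discovery log) and bits 31:16 carry NUMDL, a 0-based dword count (NUMDU in CDW11 is zero here). That decodes to reads of 1024, 3072, and 8 bytes, matching the c2h datal values in the trace: first the 1024-byte header region for GENCTR/NUMREC, then the full page including its two 1024-byte records, then an 8-byte re-read of the generation counter to verify the page did not change mid-read. A small decoding sketch in plain bash arithmetic (field layout per the NVMe base specification, not taken from this log):

# Decode the GET LOG PAGE cdw10 values from the trace (cdw11/NUMDU is 0 throughout).
for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
    lid=$(( cdw10 & 0xff ))                   # log page identifier: 0x70 = discovery
    numd=$(( ((cdw10 >> 16) & 0xffff) + 1 ))  # NUMDL, a 0-based count of dwords
    printf '%s -> LID 0x%02x, %4d dwords = %4d bytes\n' "$cdw10" "$lid" "$numd" $(( numd * 4 ))
done
# 0x00ff0070 -> LID 0x70,  256 dwords = 1024 bytes   (discovery page header region)
# 0x02ff0070 -> LID 0x70,  768 dwords = 3072 bytes   (header + 2 x 1024-byte records)
# 0x00010070 -> LID 0x70,    2 dwords =    8 bytes   (re-read GENCTR for consistency)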
00:24:50.458 [2024-07-23 14:06:41.432359] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432576] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.458 [2024-07-23 14:06:41.432582] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.458 [2024-07-23 14:06:41.432585] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432588] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ccb0) on tqpair=0xc049e0 00:24:50.458 [2024-07-23 14:06:41.432598] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc049e0) 00:24:50.458 [2024-07-23 14:06:41.432611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.458 [2024-07-23 14:06:41.432626] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6ccb0, cid 4, qid 0 00:24:50.458 [2024-07-23 14:06:41.432775] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.458 [2024-07-23 14:06:41.432785] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.458 [2024-07-23 14:06:41.432789] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432792] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc049e0): datao=0, datal=8, cccid=4 00:24:50.458 [2024-07-23 14:06:41.432797] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xc6ccb0) on tqpair(0xc049e0): expected_datao=0, payload_size=8 00:24:50.458 [2024-07-23 14:06:41.432803] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.458 [2024-07-23 14:06:41.432807] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.723 [2024-07-23 14:06:41.473342] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.723 [2024-07-23 14:06:41.473353] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.723 [2024-07-23 14:06:41.473357] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.723 [2024-07-23 14:06:41.473360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6ccb0) on tqpair=0xc049e0 00:24:50.723 ===================================================== 00:24:50.723 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:50.723 ===================================================== 00:24:50.723 Controller Capabilities/Features 00:24:50.723 ================================ 00:24:50.723 Vendor ID: 0000 00:24:50.723 Subsystem Vendor ID: 0000 00:24:50.723 Serial Number: .................... 00:24:50.723 Model Number: ........................................ 
00:24:50.723 Firmware Version: 24.01.1 00:24:50.723 Recommended Arb Burst: 0 00:24:50.723 IEEE OUI Identifier: 00 00 00 00:24:50.723 Multi-path I/O 00:24:50.723 May have multiple subsystem ports: No 00:24:50.723 May have multiple controllers: No 00:24:50.723 Associated with SR-IOV VF: No 00:24:50.723 Max Data Transfer Size: 131072 00:24:50.723 Max Number of Namespaces: 0 00:24:50.723 Max Number of I/O Queues: 1024 00:24:50.723 NVMe Specification Version (VS): 1.3 00:24:50.723 NVMe Specification Version (Identify): 1.3 00:24:50.723 Maximum Queue Entries: 128 00:24:50.723 Contiguous Queues Required: Yes 00:24:50.723 Arbitration Mechanisms Supported 00:24:50.723 Weighted Round Robin: Not Supported 00:24:50.723 Vendor Specific: Not Supported 00:24:50.723 Reset Timeout: 15000 ms 00:24:50.723 Doorbell Stride: 4 bytes 00:24:50.723 NVM Subsystem Reset: Not Supported 00:24:50.723 Command Sets Supported 00:24:50.723 NVM Command Set: Supported 00:24:50.723 Boot Partition: Not Supported 00:24:50.723 Memory Page Size Minimum: 4096 bytes 00:24:50.723 Memory Page Size Maximum: 4096 bytes 00:24:50.723 Persistent Memory Region: Not Supported 00:24:50.723 Optional Asynchronous Events Supported 00:24:50.723 Namespace Attribute Notices: Not Supported 00:24:50.723 Firmware Activation Notices: Not Supported 00:24:50.723 ANA Change Notices: Not Supported 00:24:50.723 PLE Aggregate Log Change Notices: Not Supported 00:24:50.723 LBA Status Info Alert Notices: Not Supported 00:24:50.723 EGE Aggregate Log Change Notices: Not Supported 00:24:50.723 Normal NVM Subsystem Shutdown event: Not Supported 00:24:50.723 Zone Descriptor Change Notices: Not Supported 00:24:50.723 Discovery Log Change Notices: Supported 00:24:50.723 Controller Attributes 00:24:50.723 128-bit Host Identifier: Not Supported 00:24:50.723 Non-Operational Permissive Mode: Not Supported 00:24:50.723 NVM Sets: Not Supported 00:24:50.723 Read Recovery Levels: Not Supported 00:24:50.723 Endurance Groups: Not Supported 00:24:50.723 Predictable Latency Mode: Not Supported 00:24:50.723 Traffic Based Keep ALive: Not Supported 00:24:50.723 Namespace Granularity: Not Supported 00:24:50.723 SQ Associations: Not Supported 00:24:50.723 UUID List: Not Supported 00:24:50.723 Multi-Domain Subsystem: Not Supported 00:24:50.723 Fixed Capacity Management: Not Supported 00:24:50.723 Variable Capacity Management: Not Supported 00:24:50.723 Delete Endurance Group: Not Supported 00:24:50.723 Delete NVM Set: Not Supported 00:24:50.723 Extended LBA Formats Supported: Not Supported 00:24:50.723 Flexible Data Placement Supported: Not Supported 00:24:50.723 00:24:50.723 Controller Memory Buffer Support 00:24:50.723 ================================ 00:24:50.723 Supported: No 00:24:50.723 00:24:50.723 Persistent Memory Region Support 00:24:50.723 ================================ 00:24:50.723 Supported: No 00:24:50.723 00:24:50.723 Admin Command Set Attributes 00:24:50.723 ============================ 00:24:50.723 Security Send/Receive: Not Supported 00:24:50.723 Format NVM: Not Supported 00:24:50.723 Firmware Activate/Download: Not Supported 00:24:50.723 Namespace Management: Not Supported 00:24:50.723 Device Self-Test: Not Supported 00:24:50.723 Directives: Not Supported 00:24:50.723 NVMe-MI: Not Supported 00:24:50.723 Virtualization Management: Not Supported 00:24:50.723 Doorbell Buffer Config: Not Supported 00:24:50.723 Get LBA Status Capability: Not Supported 00:24:50.723 Command & Feature Lockdown Capability: Not Supported 00:24:50.723 Abort Command Limit: 1 00:24:50.723 
Async Event Request Limit: 4 00:24:50.723 Number of Firmware Slots: N/A 00:24:50.723 Firmware Slot 1 Read-Only: N/A 00:24:50.723 Firmware Activation Without Reset: N/A 00:24:50.723 Multiple Update Detection Support: N/A 00:24:50.723 Firmware Update Granularity: No Information Provided 00:24:50.723 Per-Namespace SMART Log: No 00:24:50.723 Asymmetric Namespace Access Log Page: Not Supported 00:24:50.723 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:50.723 Command Effects Log Page: Not Supported 00:24:50.723 Get Log Page Extended Data: Supported 00:24:50.723 Telemetry Log Pages: Not Supported 00:24:50.723 Persistent Event Log Pages: Not Supported 00:24:50.723 Supported Log Pages Log Page: May Support 00:24:50.723 Commands Supported & Effects Log Page: Not Supported 00:24:50.723 Feature Identifiers & Effects Log Page: May Support 00:24:50.723 NVMe-MI Commands & Effects Log Page: May Support 00:24:50.723 Data Area 4 for Telemetry Log: Not Supported 00:24:50.723 Error Log Page Entries Supported: 128 00:24:50.723 Keep Alive: Not Supported 00:24:50.723 00:24:50.723 NVM Command Set Attributes 00:24:50.723 ========================== 00:24:50.723 Submission Queue Entry Size 00:24:50.723 Max: 1 00:24:50.723 Min: 1 00:24:50.723 Completion Queue Entry Size 00:24:50.723 Max: 1 00:24:50.723 Min: 1 00:24:50.723 Number of Namespaces: 0 00:24:50.723 Compare Command: Not Supported 00:24:50.723 Write Uncorrectable Command: Not Supported 00:24:50.723 Dataset Management Command: Not Supported 00:24:50.723 Write Zeroes Command: Not Supported 00:24:50.723 Set Features Save Field: Not Supported 00:24:50.723 Reservations: Not Supported 00:24:50.723 Timestamp: Not Supported 00:24:50.723 Copy: Not Supported 00:24:50.723 Volatile Write Cache: Not Present 00:24:50.723 Atomic Write Unit (Normal): 1 00:24:50.723 Atomic Write Unit (PFail): 1 00:24:50.723 Atomic Compare & Write Unit: 1 00:24:50.723 Fused Compare & Write: Supported 00:24:50.723 Scatter-Gather List 00:24:50.723 SGL Command Set: Supported 00:24:50.723 SGL Keyed: Supported 00:24:50.723 SGL Bit Bucket Descriptor: Not Supported 00:24:50.723 SGL Metadata Pointer: Not Supported 00:24:50.723 Oversized SGL: Not Supported 00:24:50.723 SGL Metadata Address: Not Supported 00:24:50.723 SGL Offset: Supported 00:24:50.723 Transport SGL Data Block: Not Supported 00:24:50.723 Replay Protected Memory Block: Not Supported 00:24:50.723 00:24:50.723 Firmware Slot Information 00:24:50.723 ========================= 00:24:50.723 Active slot: 0 00:24:50.723 00:24:50.723 00:24:50.723 Error Log 00:24:50.723 ========= 00:24:50.723 00:24:50.723 Active Namespaces 00:24:50.723 ================= 00:24:50.723 Discovery Log Page 00:24:50.723 ================== 00:24:50.723 Generation Counter: 2 00:24:50.723 Number of Records: 2 00:24:50.723 Record Format: 0 00:24:50.723 00:24:50.723 Discovery Log Entry 0 00:24:50.723 ---------------------- 00:24:50.723 Transport Type: 3 (TCP) 00:24:50.723 Address Family: 1 (IPv4) 00:24:50.723 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:50.723 Entry Flags: 00:24:50.723 Duplicate Returned Information: 1 00:24:50.723 Explicit Persistent Connection Support for Discovery: 1 00:24:50.723 Transport Requirements: 00:24:50.723 Secure Channel: Not Required 00:24:50.723 Port ID: 0 (0x0000) 00:24:50.723 Controller ID: 65535 (0xffff) 00:24:50.723 Admin Max SQ Size: 128 00:24:50.723 Transport Service Identifier: 4420 00:24:50.723 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:50.723 Transport Address: 10.0.0.2 00:24:50.723 
Discovery Log Entry 1 00:24:50.724 ---------------------- 00:24:50.724 Transport Type: 3 (TCP) 00:24:50.724 Address Family: 1 (IPv4) 00:24:50.724 Subsystem Type: 2 (NVM Subsystem) 00:24:50.724 Entry Flags: 00:24:50.724 Duplicate Returned Information: 0 00:24:50.724 Explicit Persistent Connection Support for Discovery: 0 00:24:50.724 Transport Requirements: 00:24:50.724 Secure Channel: Not Required 00:24:50.724 Port ID: 0 (0x0000) 00:24:50.724 Controller ID: 65535 (0xffff) 00:24:50.724 Admin Max SQ Size: 128 00:24:50.724 Transport Service Identifier: 4420 00:24:50.724 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:50.724 Transport Address: 10.0.0.2 [2024-07-23 14:06:41.473449] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:50.724 [2024-07-23 14:06:41.473462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.724 [2024-07-23 14:06:41.473468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.724 [2024-07-23 14:06:41.473473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.724 [2024-07-23 14:06:41.473478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.724 [2024-07-23 14:06:41.473486] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.473489] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.473493] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.473499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.473513] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.473645] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 14:06:41.473655] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.473658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.473661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.473668] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.473674] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.473678] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.473685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.473701] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.473839] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 14:06:41.473848] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.473851] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.473855] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.473860] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:50.724 [2024-07-23 14:06:41.473864] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:50.724 [2024-07-23 14:06:41.473875] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.473878] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.473881] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.473888] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.473900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.474029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 14:06:41.474038] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.474041] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474053] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.474065] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474069] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.474079] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.474091] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.474224] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 14:06:41.474233] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.474237] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474240] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.474250] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474254] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474257] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.474263] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.474276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.474404] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 
14:06:41.474413] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.474416] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474420] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.474433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474437] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474440] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.474446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.474458] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.474594] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 14:06:41.474603] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.474606] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474609] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.474619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474626] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.474632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.474644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.474769] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 14:06:41.474778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.474781] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474785] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.474795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474799] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.474808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.474820] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.474957] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 14:06:41.474966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.474969] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 
[2024-07-23 14:06:41.474972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.474983] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474986] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.474990] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.474996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.475008] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.724 [2024-07-23 14:06:41.475152] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.724 [2024-07-23 14:06:41.475162] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.724 [2024-07-23 14:06:41.475165] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.475169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.724 [2024-07-23 14:06:41.475182] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.475186] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.724 [2024-07-23 14:06:41.475189] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.724 [2024-07-23 14:06:41.475195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.724 [2024-07-23 14:06:41.475207] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.725 [2024-07-23 14:06:41.475343] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.475352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.475355] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475358] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.725 [2024-07-23 14:06:41.475368] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475372] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475375] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.725 [2024-07-23 14:06:41.475382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.725 [2024-07-23 14:06:41.475393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.725 [2024-07-23 14:06:41.475520] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.475529] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.475532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475535] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.725 [2024-07-23 14:06:41.475546] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475549] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475553] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.725 [2024-07-23 14:06:41.475559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.725 [2024-07-23 14:06:41.475570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.725 [2024-07-23 14:06:41.475710] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.475719] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.475722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475725] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.725 [2024-07-23 14:06:41.475736] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475739] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475743] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.725 [2024-07-23 14:06:41.475749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.725 [2024-07-23 14:06:41.475760] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.725 [2024-07-23 14:06:41.475889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.475897] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.475900] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475904] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.725 [2024-07-23 14:06:41.475914] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.475923] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.725 [2024-07-23 14:06:41.475930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.725 [2024-07-23 14:06:41.475942] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.725 [2024-07-23 14:06:41.480052] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.480064] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.480067] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.480071] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.725 [2024-07-23 14:06:41.480082] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.480086] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.480089] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc049e0) 00:24:50.725 [2024-07-23 14:06:41.480096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.725 [2024-07-23 14:06:41.480109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xc6cb50, cid 3, qid 0 00:24:50.725 [2024-07-23 14:06:41.480274] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.480284] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.480287] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.480290] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xc6cb50) on tqpair=0xc049e0 00:24:50.725 [2024-07-23 14:06:41.480299] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:50.725 00:24:50.725 14:06:41 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:50.725 [2024-07-23 14:06:41.514900] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:50.725 [2024-07-23 14:06:41.514946] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370256 ] 00:24:50.725 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.725 [2024-07-23 14:06:41.542291] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:50.725 [2024-07-23 14:06:41.542329] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:50.725 [2024-07-23 14:06:41.542333] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:50.725 [2024-07-23 14:06:41.542344] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:50.725 [2024-07-23 14:06:41.542350] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:50.725 [2024-07-23 14:06:41.542824] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:50.725 [2024-07-23 14:06:41.542849] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x21aa9e0 0 00:24:50.725 [2024-07-23 14:06:41.557058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:50.725 [2024-07-23 14:06:41.557076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:50.725 [2024-07-23 14:06:41.557079] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:50.725 [2024-07-23 14:06:41.557085] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:50.725 [2024-07-23 14:06:41.557115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.557121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.557124] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.725 [2024-07-23 14:06:41.557134] 
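[annotation] The raw "pdu type" numbers that recur through these traces are NVMe/TCP common-header PDU type codes. A minimal decoder makes the trace easier to read; the code points are taken from the NVMe/TCP transport specification, not from SPDK source.

    /* Decode the NVMe/TCP PDU type numbers printed in these traces.
     * Code points per the NVMe/TCP transport spec (common header, PDU-Type);
     * this is a reading aid, not SPDK code. */
    #include <stdio.h>

    static const char *pdu_type_name(int t)
    {
        switch (t) {
        case 0: return "ICReq";        /* host -> controller connection setup */
        case 1: return "ICResp";       /* controller's reply to the ICReq */
        case 4: return "CapsuleCmd";   /* command capsule carrying an SQE */
        case 5: return "CapsuleResp";  /* response capsule carrying a CQE */
        case 6: return "H2CData";
        case 7: return "C2HData";      /* controller-to-host data, e.g. Identify payloads */
        default: return "other/reserved";
        }
    }

    int main(void)
    {
        int seen[] = { 1, 5, 7 };      /* the three types that appear in this log */
        for (int i = 0; i < 3; i++)
            printf("pdu type = %d -> %s\n", seen[i], pdu_type_name(seen[i]));
        return 0;
    }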
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:50.725 [2024-07-23 14:06:41.557149] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.725 [2024-07-23 14:06:41.565056] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.565064] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.565067] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.565087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.725 [2024-07-23 14:06:41.565098] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:50.725 [2024-07-23 14:06:41.565104] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:50.725 [2024-07-23 14:06:41.565108] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:50.725 [2024-07-23 14:06:41.565118] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.565121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.565125] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.725 [2024-07-23 14:06:41.565131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.725 [2024-07-23 14:06:41.565143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.725 [2024-07-23 14:06:41.565349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.565362] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.565365] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.565369] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.725 [2024-07-23 14:06:41.565375] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:50.725 [2024-07-23 14:06:41.565383] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:50.725 [2024-07-23 14:06:41.565391] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.565394] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.565398] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.725 [2024-07-23 14:06:41.565405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.725 [2024-07-23 14:06:41.565420] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.725 [2024-07-23 14:06:41.565554] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.725 [2024-07-23 14:06:41.565563] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.725 [2024-07-23 14:06:41.565566] 
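[annotation] The "read vs" state above fetches the controller's Version (VS) property. A sketch of the field split, using the layout from the NVMe base spec (MJR in bits 31:16, MNR in 15:8, TER in 7:0); the sample value 0x00010300 is an assumption, chosen to be consistent with the "NVMe Specification Version (VS): 1.3" line in the identify report further down.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t vs = 0x00010300;   /* assumed sample; decodes to NVMe 1.3.0 */
        printf("VS=0x%08x -> NVMe %u.%u.%u\n", (unsigned)vs,
               (unsigned)(vs >> 16), (unsigned)((vs >> 8) & 0xff),
               (unsigned)(vs & 0xff));
        return 0;
    }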
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.725 [2024-07-23 14:06:41.565569] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.726 [2024-07-23 14:06:41.565575] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:50.726 [2024-07-23 14:06:41.565584] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:50.726 [2024-07-23 14:06:41.565593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.565597] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.565600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.565608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.726 [2024-07-23 14:06:41.565621] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.726 [2024-07-23 14:06:41.565756] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.726 [2024-07-23 14:06:41.565765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.726 [2024-07-23 14:06:41.565768] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.565771] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.726 [2024-07-23 14:06:41.565777] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:50.726 [2024-07-23 14:06:41.565788] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.565792] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.565795] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.565802] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.726 [2024-07-23 14:06:41.565815] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.726 [2024-07-23 14:06:41.565952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.726 [2024-07-23 14:06:41.565961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.726 [2024-07-23 14:06:41.565964] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.565967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.726 [2024-07-23 14:06:41.565972] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:50.726 [2024-07-23 14:06:41.565977] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:50.726 [2024-07-23 14:06:41.565986] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:50.726 [2024-07-23 
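[annotation] The "check en", "disable and wait for CSTS.RDY = 0" and "wait for CSTS.RDY = 1" states above are the standard NVMe enable handshake: clear CC.EN and wait for ready to drop, then set CC.EN and wait for ready to rise. A compressed sketch with simulated register accessors; the sim_* helpers are stand-ins for illustration, not SPDK's transport code, and bit positions (CC.EN = bit 0, CSTS.RDY = bit 0) are from the NVMe base spec.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t sim_cc;                                   /* fake controller state */
    static uint32_t read_csts(void) { return sim_cc & 1u; }   /* fake: RDY tracks EN instantly */
    static void write_cc(uint32_t v) { sim_cc = v; }

    int main(void)
    {
        write_cc(0);                      /* clear CC.EN ... */
        while (read_csts() & 1u)          /* ...and wait for CSTS.RDY = 0 */
            ;
        puts("controller is disabled");
        write_cc(1);                      /* set CC.EN = 1 ... */
        while (!(read_csts() & 1u))       /* ...and wait for CSTS.RDY = 1 */
            ;
        puts("CC.EN = 1 && CSTS.RDY = 1 - controller is ready");
        return 0;
    }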
14:06:41.566091] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:50.726 [2024-07-23 14:06:41.566094] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:50.726 [2024-07-23 14:06:41.566102] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566105] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566108] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.566116] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.726 [2024-07-23 14:06:41.566129] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.726 [2024-07-23 14:06:41.566414] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.726 [2024-07-23 14:06:41.566419] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.726 [2024-07-23 14:06:41.566422] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566425] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.726 [2024-07-23 14:06:41.566430] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:50.726 [2024-07-23 14:06:41.566441] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566445] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566448] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.566454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.726 [2024-07-23 14:06:41.566463] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.726 [2024-07-23 14:06:41.566598] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.726 [2024-07-23 14:06:41.566607] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.726 [2024-07-23 14:06:41.566610] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566613] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.726 [2024-07-23 14:06:41.566618] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:50.726 [2024-07-23 14:06:41.566622] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:50.726 [2024-07-23 14:06:41.566631] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:50.726 [2024-07-23 14:06:41.566642] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:50.726 [2024-07-23 14:06:41.566651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: 
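[annotation] The "(timeout 15000 ms)" on the enable/disable states above comes from CAP.TO, which counts 500 ms units. The sketch below decodes a sample CAP value assembled by hand to match this controller's reported properties (MQES = 127 for 128 queue entries, CQR = 1, TO = 30 for 15000 ms, DSTRD = 0 for a 4-byte doorbell stride); the 64-bit value itself is an assumption, not something printed in this log.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t cap = (30ULL << 24) | (1ULL << 16) | 127ULL;   /* TO | CQR | MQES */
        printf("Maximum Queue Entries: %llu\n",
               (unsigned long long)(cap & 0xffff) + 1);          /* MQES is zero-based */
        printf("Contiguous Queues Required: %s\n",
               (cap >> 16) & 1 ? "Yes" : "No");
        printf("Reset Timeout: %llu ms\n",
               (unsigned long long)((cap >> 24) & 0xff) * 500);  /* CAP.TO * 500 ms */
        printf("Doorbell Stride: %u bytes\n",
               4u << ((cap >> 32) & 0xf));                       /* 2^(2+DSTRD) */
        return 0;
    }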
*DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566654] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566657] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.566664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.726 [2024-07-23 14:06:41.566676] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.726 [2024-07-23 14:06:41.566845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.726 [2024-07-23 14:06:41.566855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.726 [2024-07-23 14:06:41.566859] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566862] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21aa9e0): datao=0, datal=4096, cccid=0 00:24:50.726 [2024-07-23 14:06:41.566866] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2212730) on tqpair(0x21aa9e0): expected_datao=0, payload_size=4096 00:24:50.726 [2024-07-23 14:06:41.566872] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566876] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566959] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.726 [2024-07-23 14:06:41.566968] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.726 [2024-07-23 14:06:41.566971] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.566974] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.726 [2024-07-23 14:06:41.566982] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:50.726 [2024-07-23 14:06:41.566987] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:50.726 [2024-07-23 14:06:41.566991] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:50.726 [2024-07-23 14:06:41.566995] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:50.726 [2024-07-23 14:06:41.567001] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:50.726 [2024-07-23 14:06:41.567005] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:50.726 [2024-07-23 14:06:41.567017] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:50.726 [2024-07-23 14:06:41.567024] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567028] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567031] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.567038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 
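[annotation] The "MDTS max_xfer_size 131072" line above follows from Identify Controller's MDTS field, which is a power of two in units of the controller's minimum memory page size (CAP.MPSMIN). Assuming a 4 KiB minimum page, MDTS = 5 reproduces the 128 KiB limit seen here; both inputs are illustrative assumptions.

    #include <stdio.h>

    int main(void)
    {
        unsigned mdts = 5, min_page = 4096;            /* assumed values */
        printf("MDTS max_xfer_size %u\n", (1u << mdts) * min_page);   /* -> 131072 */
        return 0;
    }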
0x0 len:0x0 00:24:50.726 [2024-07-23 14:06:41.567058] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.726 [2024-07-23 14:06:41.567192] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.726 [2024-07-23 14:06:41.567201] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.726 [2024-07-23 14:06:41.567204] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567208] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212730) on tqpair=0x21aa9e0 00:24:50.726 [2024-07-23 14:06:41.567215] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567218] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567223] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.567231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.726 [2024-07-23 14:06:41.567238] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567243] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.567251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.726 [2024-07-23 14:06:41.567256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567259] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567262] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.567267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.726 [2024-07-23 14:06:41.567272] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567275] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.726 [2024-07-23 14:06:41.567278] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.726 [2024-07-23 14:06:41.567283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.726 [2024-07-23 14:06:41.567287] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.567299] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.567305] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.567309] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.567312] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21aa9e0) 00:24:50.727 [2024-07-23 14:06:41.567319] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE 
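[annotation] The "configure AER" step above issues Set Features (opcode 09h, feature 0Bh: Async Event Configuration) and then parks one Async Event Request (opcode 0Ch) per allowed slot; the four requests on cid 0 through cid 3 match the "Async Event Request Limit: 4" reported later. A simplified sketch of the command dwords; the struct below is a cut-down stand-in for a real 64-byte SQE.

    #include <stdint.h>
    #include <stdio.h>

    struct sqe { uint8_t opc; uint16_t cid; uint32_t cdw10; };

    int main(void)
    {
        struct sqe set_feat = { 0x09, 0, 0x0000000b };   /* Set Features, FID 0Bh in cdw10[7:0] */
        printf("SET FEATURES (%02xh) cdw10:%08x\n",
               (unsigned)set_feat.opc, (unsigned)set_feat.cdw10);
        for (uint16_t cid = 0; cid < 4; cid++) {         /* one outstanding AER per slot */
            struct sqe aer = { 0x0c, cid, 0 };           /* AER carries no dword payload */
            printf("ASYNC EVENT REQUEST (%02xh) cid:%u\n",
                   (unsigned)aer.opc, (unsigned)aer.cid);
        }
        return 0;
    }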
TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.727 [2024-07-23 14:06:41.567333] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212730, cid 0, qid 0 00:24:50.727 [2024-07-23 14:06:41.567338] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212890, cid 1, qid 0 00:24:50.727 [2024-07-23 14:06:41.567342] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22129f0, cid 2, qid 0 00:24:50.727 [2024-07-23 14:06:41.567345] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.727 [2024-07-23 14:06:41.567349] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212cb0, cid 4, qid 0 00:24:50.727 [2024-07-23 14:06:41.567524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.727 [2024-07-23 14:06:41.567533] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.727 [2024-07-23 14:06:41.567535] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.567539] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212cb0) on tqpair=0x21aa9e0 00:24:50.727 [2024-07-23 14:06:41.567544] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:50.727 [2024-07-23 14:06:41.567549] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.567557] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.567567] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.567573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.567576] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.567580] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21aa9e0) 00:24:50.727 [2024-07-23 14:06:41.567586] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:50.727 [2024-07-23 14:06:41.567598] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212cb0, cid 4, qid 0 00:24:50.727 [2024-07-23 14:06:41.567732] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.727 [2024-07-23 14:06:41.567740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.727 [2024-07-23 14:06:41.567743] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.567747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212cb0) on tqpair=0x21aa9e0 00:24:50.727 [2024-07-23 14:06:41.567802] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.567812] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.567819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.727 [2024-07-23 
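[annotation] The "SET FEATURES NUMBER OF QUEUES ... cdw10:00000007" entry above uses feature 07h, which takes the requested counts in cdw11, zero-based: NSQR in bits 15:0 and NCQR in bits 31:16. A sketch of the encoding, assuming a request for 127 queue pairs (the maximum this controller reports); the request size is an assumption for illustration.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t nsq = 127, ncq = 127;                    /* desired I/O queue counts */
        uint32_t cdw11 = ((ncq - 1) << 16) | (nsq - 1);   /* zero-based NCQR | NSQR */
        printf("cdw10:00000007 cdw11:%08x\n", (unsigned)cdw11);   /* -> cdw11:007e007e */
        return 0;
    }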
14:06:41.567822] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.567826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21aa9e0) 00:24:50.727 [2024-07-23 14:06:41.567831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.727 [2024-07-23 14:06:41.567844] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212cb0, cid 4, qid 0 00:24:50.727 [2024-07-23 14:06:41.568001] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.727 [2024-07-23 14:06:41.568010] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.727 [2024-07-23 14:06:41.568013] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568019] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21aa9e0): datao=0, datal=4096, cccid=4 00:24:50.727 [2024-07-23 14:06:41.568023] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2212cb0) on tqpair(0x21aa9e0): expected_datao=0, payload_size=4096 00:24:50.727 [2024-07-23 14:06:41.568030] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568033] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568119] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.727 [2024-07-23 14:06:41.568128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.727 [2024-07-23 14:06:41.568131] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568135] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212cb0) on tqpair=0x21aa9e0 00:24:50.727 [2024-07-23 14:06:41.568150] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:50.727 [2024-07-23 14:06:41.568158] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.568168] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.568175] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568179] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568182] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21aa9e0) 00:24:50.727 [2024-07-23 14:06:41.568188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.727 [2024-07-23 14:06:41.568201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212cb0, cid 4, qid 0 00:24:50.727 [2024-07-23 14:06:41.568347] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.727 [2024-07-23 14:06:41.568356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.727 [2024-07-23 14:06:41.568359] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568362] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21aa9e0): datao=0, datal=4096, cccid=4 00:24:50.727 
[2024-07-23 14:06:41.568366] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2212cb0) on tqpair(0x21aa9e0): expected_datao=0, payload_size=4096 00:24:50.727 [2024-07-23 14:06:41.568572] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568576] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.727 [2024-07-23 14:06:41.568688] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.727 [2024-07-23 14:06:41.568691] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568694] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212cb0) on tqpair=0x21aa9e0 00:24:50.727 [2024-07-23 14:06:41.568711] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.568721] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:50.727 [2024-07-23 14:06:41.568729] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568735] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21aa9e0) 00:24:50.727 [2024-07-23 14:06:41.568741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.727 [2024-07-23 14:06:41.568754] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212cb0, cid 4, qid 0 00:24:50.727 [2024-07-23 14:06:41.568970] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.727 [2024-07-23 14:06:41.568980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.727 [2024-07-23 14:06:41.568983] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.568986] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21aa9e0): datao=0, datal=4096, cccid=4 00:24:50.727 [2024-07-23 14:06:41.568990] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2212cb0) on tqpair(0x21aa9e0): expected_datao=0, payload_size=4096 00:24:50.727 [2024-07-23 14:06:41.568997] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.569000] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.727 [2024-07-23 14:06:41.573055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.727 [2024-07-23 14:06:41.573063] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.727 [2024-07-23 14:06:41.573066] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573069] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212cb0) on tqpair=0x21aa9e0 00:24:50.728 [2024-07-23 14:06:41.573078] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:50.728 [2024-07-23 14:06:41.573086] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting 
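[annotation] The Identify commands in this trace differ only in their CNS code (cdw10 bits 7:0): 01h controller data, 02h active namespace ID list, 00h namespace data, 03h namespace identification descriptors, i.e. exactly the cdw10 values 00000001, 00000002, 00000000 and 00000003 seen above. A decoder using the CNS codes from the NVMe base spec:

    #include <stdint.h>
    #include <stdio.h>

    static const char *cns_name(uint8_t cns)
    {
        switch (cns) {
        case 0x00: return "Identify Namespace";
        case 0x01: return "Identify Controller";
        case 0x02: return "Active Namespace ID List";
        case 0x03: return "Namespace Identification Descriptor List";
        default:   return "other/unsupported";
        }
    }

    int main(void)
    {
        uint32_t seen[] = { 0x00000001, 0x00000002, 0x00000000, 0x00000003 };
        for (int i = 0; i < 4; i++)
            printf("IDENTIFY cdw10:%08x -> CNS %02xh (%s)\n",
                   (unsigned)seen[i], (unsigned)(seen[i] & 0xff),
                   cns_name(seen[i] & 0xff));
        return 0;
    }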
state to set supported log pages (timeout 30000 ms) 00:24:50.728 [2024-07-23 14:06:41.573095] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:50.728 [2024-07-23 14:06:41.573101] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:50.728 [2024-07-23 14:06:41.573106] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:50.728 [2024-07-23 14:06:41.573110] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:50.728 [2024-07-23 14:06:41.573114] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:50.728 [2024-07-23 14:06:41.573119] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:50.728 [2024-07-23 14:06:41.573132] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573136] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.573146] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.728 [2024-07-23 14:06:41.573151] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573154] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573158] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.573163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:50.728 [2024-07-23 14:06:41.573196] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212cb0, cid 4, qid 0 00:24:50.728 [2024-07-23 14:06:41.573201] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212e10, cid 5, qid 0 00:24:50.728 [2024-07-23 14:06:41.573366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.728 [2024-07-23 14:06:41.573375] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.728 [2024-07-23 14:06:41.573379] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573382] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212cb0) on tqpair=0x21aa9e0 00:24:50.728 [2024-07-23 14:06:41.573392] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.728 [2024-07-23 14:06:41.573397] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.728 [2024-07-23 14:06:41.573400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573403] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212e10) on tqpair=0x21aa9e0 00:24:50.728 [2024-07-23 14:06:41.573414] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573417] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:50.728 [2024-07-23 14:06:41.573420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.573426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.728 [2024-07-23 14:06:41.573438] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212e10, cid 5, qid 0 00:24:50.728 [2024-07-23 14:06:41.573572] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.728 [2024-07-23 14:06:41.573581] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.728 [2024-07-23 14:06:41.573584] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573587] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212e10) on tqpair=0x21aa9e0 00:24:50.728 [2024-07-23 14:06:41.573597] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573601] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573604] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.573611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.728 [2024-07-23 14:06:41.573623] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212e10, cid 5, qid 0 00:24:50.728 [2024-07-23 14:06:41.573760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.728 [2024-07-23 14:06:41.573769] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.728 [2024-07-23 14:06:41.573772] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573775] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212e10) on tqpair=0x21aa9e0 00:24:50.728 [2024-07-23 14:06:41.573786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573790] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573793] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.573799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.728 [2024-07-23 14:06:41.573811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212e10, cid 5, qid 0 00:24:50.728 [2024-07-23 14:06:41.573939] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.728 [2024-07-23 14:06:41.573948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.728 [2024-07-23 14:06:41.573952] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573955] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212e10) on tqpair=0x21aa9e0 00:24:50.728 [2024-07-23 14:06:41.573969] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573973] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573976] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on 
tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.573982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.728 [2024-07-23 14:06:41.573989] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.573998] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.574003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.728 [2024-07-23 14:06:41.574009] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574013] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574016] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.574021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.728 [2024-07-23 14:06:41.574027] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574030] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574033] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21aa9e0) 00:24:50.728 [2024-07-23 14:06:41.574038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.728 [2024-07-23 14:06:41.574061] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212e10, cid 5, qid 0 00:24:50.728 [2024-07-23 14:06:41.574066] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212cb0, cid 4, qid 0 00:24:50.728 [2024-07-23 14:06:41.574070] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212f70, cid 6, qid 0 00:24:50.728 [2024-07-23 14:06:41.574074] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22130d0, cid 7, qid 0 00:24:50.728 [2024-07-23 14:06:41.574349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.728 [2024-07-23 14:06:41.574360] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.728 [2024-07-23 14:06:41.574363] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574366] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21aa9e0): datao=0, datal=8192, cccid=5 00:24:50.728 [2024-07-23 14:06:41.574370] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2212e10) on tqpair(0x21aa9e0): expected_datao=0, payload_size=8192 00:24:50.728 [2024-07-23 14:06:41.574377] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574380] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574385] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.728 [2024-07-23 14:06:41.574390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.728 
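[annotation] The four GET LOG PAGE commands above pack the log page ID into cdw10 bits 7:0 and the dword count minus one (NUMDL) into bits 27:16. Decoding the exact cdw10 values from this trace recovers the transfer sizes that the c2h_data PDUs below then deliver (8192, 512, 512 and 4096 bytes):

    #include <stdint.h>
    #include <stdio.h>

    static const char *lid_name(uint8_t lid)
    {
        switch (lid) {
        case 0x01: return "Error Information";
        case 0x02: return "SMART / Health Information";
        case 0x03: return "Firmware Slot Information";
        case 0x05: return "Commands Supported and Effects";
        default:   return "other";
        }
    }

    int main(void)
    {
        uint32_t seen[] = { 0x07ff0001, 0x007f0002, 0x007f0003, 0x03ff0005 };
        for (int i = 0; i < 4; i++) {
            uint32_t numd = ((seen[i] >> 16) & 0x0fff) + 1;   /* NUMDL is zero-based dwords */
            printf("cdw10:%08x -> %-31s %u bytes\n",
                   (unsigned)seen[i], lid_name(seen[i] & 0xff),
                   (unsigned)(numd * 4));
        }
        return 0;
    }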
[2024-07-23 14:06:41.574393] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574396] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21aa9e0): datao=0, datal=512, cccid=4 00:24:50.728 [2024-07-23 14:06:41.574400] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2212cb0) on tqpair(0x21aa9e0): expected_datao=0, payload_size=512 00:24:50.728 [2024-07-23 14:06:41.574405] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574409] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.728 [2024-07-23 14:06:41.574418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.728 [2024-07-23 14:06:41.574421] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574424] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21aa9e0): datao=0, datal=512, cccid=6 00:24:50.728 [2024-07-23 14:06:41.574428] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2212f70) on tqpair(0x21aa9e0): expected_datao=0, payload_size=512 00:24:50.728 [2024-07-23 14:06:41.574436] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574440] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574444] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:50.728 [2024-07-23 14:06:41.574449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:50.728 [2024-07-23 14:06:41.574452] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:50.728 [2024-07-23 14:06:41.574455] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x21aa9e0): datao=0, datal=4096, cccid=7 00:24:50.729 [2024-07-23 14:06:41.574459] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x22130d0) on tqpair(0x21aa9e0): expected_datao=0, payload_size=4096 00:24:50.729 [2024-07-23 14:06:41.574465] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:50.729 [2024-07-23 14:06:41.574468] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:50.729 [2024-07-23 14:06:41.574612] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.729 [2024-07-23 14:06:41.574618] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.729 [2024-07-23 14:06:41.574621] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.729 [2024-07-23 14:06:41.574624] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212e10) on tqpair=0x21aa9e0 00:24:50.729 [2024-07-23 14:06:41.574638] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.729 [2024-07-23 14:06:41.574643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.729 [2024-07-23 14:06:41.574646] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.729 [2024-07-23 14:06:41.574649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212cb0) on tqpair=0x21aa9e0 00:24:50.729 [2024-07-23 14:06:41.574658] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.729 [2024-07-23 14:06:41.574663] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.729 [2024-07-23 14:06:41.574666] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:50.729 [2024-07-23 14:06:41.574669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212f70) on tqpair=0x21aa9e0
00:24:50.729 [2024-07-23 14:06:41.574675] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:50.729 [2024-07-23 14:06:41.574680] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:50.729 [2024-07-23 14:06:41.574683] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:50.729 [2024-07-23 14:06:41.574686] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22130d0) on tqpair=0x21aa9e0
00:24:50.729 =====================================================
00:24:50.729 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:50.729 =====================================================
00:24:50.729 Controller Capabilities/Features
00:24:50.729 ================================
00:24:50.729 Vendor ID: 8086
00:24:50.729 Subsystem Vendor ID: 8086
00:24:50.729 Serial Number: SPDK00000000000001
00:24:50.729 Model Number: SPDK bdev Controller
00:24:50.729 Firmware Version: 24.01.1
00:24:50.729 Recommended Arb Burst: 6
00:24:50.729 IEEE OUI Identifier: e4 d2 5c
00:24:50.729 Multi-path I/O
00:24:50.729 May have multiple subsystem ports: Yes
00:24:50.729 May have multiple controllers: Yes
00:24:50.729 Associated with SR-IOV VF: No
00:24:50.729 Max Data Transfer Size: 131072
00:24:50.729 Max Number of Namespaces: 32
00:24:50.729 Max Number of I/O Queues: 127
00:24:50.729 NVMe Specification Version (VS): 1.3
00:24:50.729 NVMe Specification Version (Identify): 1.3
00:24:50.729 Maximum Queue Entries: 128
00:24:50.729 Contiguous Queues Required: Yes
00:24:50.729 Arbitration Mechanisms Supported
00:24:50.729 Weighted Round Robin: Not Supported
00:24:50.729 Vendor Specific: Not Supported
00:24:50.729 Reset Timeout: 15000 ms
00:24:50.729 Doorbell Stride: 4 bytes
00:24:50.729 NVM Subsystem Reset: Not Supported
00:24:50.729 Command Sets Supported
00:24:50.729 NVM Command Set: Supported
00:24:50.729 Boot Partition: Not Supported
00:24:50.729 Memory Page Size Minimum: 4096 bytes
00:24:50.729 Memory Page Size Maximum: 4096 bytes
00:24:50.729 Persistent Memory Region: Not Supported
00:24:50.729 Optional Asynchronous Events Supported
00:24:50.729 Namespace Attribute Notices: Supported
00:24:50.729 Firmware Activation Notices: Not Supported
00:24:50.729 ANA Change Notices: Not Supported
00:24:50.729 PLE Aggregate Log Change Notices: Not Supported
00:24:50.729 LBA Status Info Alert Notices: Not Supported
00:24:50.729 EGE Aggregate Log Change Notices: Not Supported
00:24:50.729 Normal NVM Subsystem Shutdown event: Not Supported
00:24:50.729 Zone Descriptor Change Notices: Not Supported
00:24:50.729 Discovery Log Change Notices: Not Supported
00:24:50.729 Controller Attributes
00:24:50.729 128-bit Host Identifier: Supported
00:24:50.729 Non-Operational Permissive Mode: Not Supported
00:24:50.729 NVM Sets: Not Supported
00:24:50.729 Read Recovery Levels: Not Supported
00:24:50.729 Endurance Groups: Not Supported
00:24:50.729 Predictable Latency Mode: Not Supported
00:24:50.729 Traffic Based Keep ALive: Not Supported
00:24:50.729 Namespace Granularity: Not Supported
00:24:50.729 SQ Associations: Not Supported
00:24:50.729 UUID List: Not Supported
00:24:50.729 Multi-Domain Subsystem: Not Supported
00:24:50.729 Fixed Capacity Management: Not Supported
00:24:50.729 Variable Capacity Management: Not Supported
00:24:50.729 Delete Endurance Group: Not Supported
00:24:50.729 Delete NVM Set: Not Supported
00:24:50.729 Extended LBA Formats Supported: Not Supported
00:24:50.729 Flexible Data Placement Supported: Not Supported
00:24:50.729
00:24:50.729 Controller Memory Buffer Support
00:24:50.729 ================================
00:24:50.729 Supported: No
00:24:50.729
00:24:50.729 Persistent Memory Region Support
00:24:50.729 ================================
00:24:50.729 Supported: No
00:24:50.729
00:24:50.729 Admin Command Set Attributes
00:24:50.729 ============================
00:24:50.729 Security Send/Receive: Not Supported
00:24:50.729 Format NVM: Not Supported
00:24:50.729 Firmware Activate/Download: Not Supported
00:24:50.729 Namespace Management: Not Supported
00:24:50.729 Device Self-Test: Not Supported
00:24:50.729 Directives: Not Supported
00:24:50.729 NVMe-MI: Not Supported
00:24:50.729 Virtualization Management: Not Supported
00:24:50.729 Doorbell Buffer Config: Not Supported
00:24:50.729 Get LBA Status Capability: Not Supported
00:24:50.729 Command & Feature Lockdown Capability: Not Supported
00:24:50.729 Abort Command Limit: 4
00:24:50.729 Async Event Request Limit: 4
00:24:50.729 Number of Firmware Slots: N/A
00:24:50.729 Firmware Slot 1 Read-Only: N/A
00:24:50.729 Firmware Activation Without Reset: N/A
00:24:50.729 Multiple Update Detection Support: N/A
00:24:50.729 Firmware Update Granularity: No Information Provided
00:24:50.729 Per-Namespace SMART Log: No
00:24:50.729 Asymmetric Namespace Access Log Page: Not Supported
00:24:50.729 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:24:50.729 Command Effects Log Page: Supported
00:24:50.729 Get Log Page Extended Data: Supported
00:24:50.729 Telemetry Log Pages: Not Supported
00:24:50.729 Persistent Event Log Pages: Not Supported
00:24:50.729 Supported Log Pages Log Page: May Support
00:24:50.729 Commands Supported & Effects Log Page: Not Supported
00:24:50.729 Feature Identifiers & Effects Log Page:May Support
00:24:50.729 NVMe-MI Commands & Effects Log Page: May Support
00:24:50.729 Data Area 4 for Telemetry Log: Not Supported
00:24:50.729 Error Log Page Entries Supported: 128
00:24:50.729 Keep Alive: Supported
00:24:50.729 Keep Alive Granularity: 10000 ms
00:24:50.729
00:24:50.729 NVM Command Set Attributes
00:24:50.729 ==========================
00:24:50.729 Submission Queue Entry Size
00:24:50.729 Max: 64
00:24:50.729 Min: 64
00:24:50.729 Completion Queue Entry Size
00:24:50.729 Max: 16
00:24:50.729 Min: 16
00:24:50.729 Number of Namespaces: 32
00:24:50.729 Compare Command: Supported
00:24:50.729 Write Uncorrectable Command: Not Supported
00:24:50.729 Dataset Management Command: Supported
00:24:50.729 Write Zeroes Command: Supported
00:24:50.729 Set Features Save Field: Not Supported
00:24:50.729 Reservations: Supported
00:24:50.729 Timestamp: Not Supported
00:24:50.729 Copy: Supported
00:24:50.729 Volatile Write Cache: Present
00:24:50.729 Atomic Write Unit (Normal): 1
00:24:50.729 Atomic Write Unit (PFail): 1
00:24:50.729 Atomic Compare & Write Unit: 1
00:24:50.729 Fused Compare & Write: Supported
00:24:50.729 Scatter-Gather List
00:24:50.729 SGL Command Set: Supported
00:24:50.729 SGL Keyed: Supported
00:24:50.729 SGL Bit Bucket Descriptor: Not Supported
00:24:50.729 SGL Metadata Pointer: Not Supported
00:24:50.729 Oversized SGL: Not Supported
00:24:50.729 SGL Metadata Address: Not Supported
00:24:50.729 SGL Offset: Supported
00:24:50.729 Transport SGL Data Block: Not Supported
00:24:50.729 Replay Protected Memory Block: Not Supported
00:24:50.729
00:24:50.729 Firmware Slot Information
00:24:50.729 =========================
00:24:50.729 Active slot: 1
00:24:50.729 Slot 1 Firmware Revision: 24.01.1
00:24:50.729
00:24:50.729
00:24:50.729 Commands Supported and Effects
00:24:50.729 ==============================
00:24:50.729 Admin Commands
00:24:50.729 --------------
00:24:50.729 Get Log Page (02h): Supported
00:24:50.729 Identify (06h): Supported
00:24:50.729 Abort (08h): Supported
00:24:50.729 Set Features (09h): Supported
00:24:50.729 Get Features (0Ah): Supported
00:24:50.729 Asynchronous Event Request (0Ch): Supported
00:24:50.729 Keep Alive (18h): Supported
00:24:50.730 I/O Commands
00:24:50.730 ------------
00:24:50.730 Flush (00h): Supported LBA-Change
00:24:50.730 Write (01h): Supported LBA-Change
00:24:50.730 Read (02h): Supported
00:24:50.730 Compare (05h): Supported
00:24:50.730 Write Zeroes (08h): Supported LBA-Change
00:24:50.730 Dataset Management (09h): Supported LBA-Change
00:24:50.730 Copy (19h): Supported LBA-Change
00:24:50.730 Unknown (79h): Supported LBA-Change
00:24:50.730 Unknown (7Ah): Supported
00:24:50.730
00:24:50.730 Error Log
00:24:50.730 =========
00:24:50.730
00:24:50.730 Arbitration
00:24:50.730 ===========
00:24:50.730 Arbitration Burst: 1
00:24:50.730
00:24:50.730 Power Management
00:24:50.730 ================
00:24:50.730 Number of Power States: 1
00:24:50.730 Current Power State: Power State #0
00:24:50.730 Power State #0:
00:24:50.730 Max Power: 0.00 W
00:24:50.730 Non-Operational State: Operational
00:24:50.730 Entry Latency: Not Reported
00:24:50.730 Exit Latency: Not Reported
00:24:50.730 Relative Read Throughput: 0
00:24:50.730 Relative Read Latency: 0
00:24:50.730 Relative Write Throughput: 0
00:24:50.730 Relative Write Latency: 0
00:24:50.730 Idle Power: Not Reported
00:24:50.730 Active Power: Not Reported
00:24:50.730 Non-Operational Permissive Mode: Not Supported
00:24:50.730
00:24:50.730 Health Information
00:24:50.730 ==================
00:24:50.730 Critical Warnings:
00:24:50.730 Available Spare Space: OK
00:24:50.730 Temperature: OK
00:24:50.730 Device Reliability: OK
00:24:50.730 Read Only: No
00:24:50.730 Volatile Memory Backup: OK
00:24:50.730 Current Temperature: 0 Kelvin (-273 Celsius)
00:24:50.730 Temperature Threshold: [2024-07-23 14:06:41.574781] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:50.730 [2024-07-23 14:06:41.574785] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:50.730 [2024-07-23 14:06:41.574788] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x21aa9e0)
00:24:50.730 [2024-07-23 14:06:41.574794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:50.730 [2024-07-23 14:06:41.574807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x22130d0, cid 7, qid 0
00:24:50.730 [2024-07-23 14:06:41.574952] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:50.730 [2024-07-23 14:06:41.574961] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:50.730 [2024-07-23 14:06:41.574964] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:50.730 [2024-07-23 14:06:41.574968] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x22130d0) on tqpair=0x21aa9e0
00:24:50.730 [2024-07-23 14:06:41.574998] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:50.730
[2024-07-23 14:06:41.575009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.730 [2024-07-23 14:06:41.575016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.730 [2024-07-23 14:06:41.575024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.730 [2024-07-23 14:06:41.575029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.730 [2024-07-23 14:06:41.575037] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575040] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575051] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.730 [2024-07-23 14:06:41.575058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.730 [2024-07-23 14:06:41.575071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.730 [2024-07-23 14:06:41.575214] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.730 [2024-07-23 14:06:41.575223] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.730 [2024-07-23 14:06:41.575226] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575229] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.730 [2024-07-23 14:06:41.575237] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575244] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.730 [2024-07-23 14:06:41.575250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.730 [2024-07-23 14:06:41.575266] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.730 [2024-07-23 14:06:41.575409] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.730 [2024-07-23 14:06:41.575418] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.730 [2024-07-23 14:06:41.575421] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575424] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.730 [2024-07-23 14:06:41.575429] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:50.730 [2024-07-23 14:06:41.575433] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:50.730 [2024-07-23 14:06:41.575443] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575447] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x21aa9e0) 00:24:50.730 [2024-07-23 14:06:41.575457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.730 [2024-07-23 14:06:41.575469] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.730 [2024-07-23 14:06:41.575602] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.730 [2024-07-23 14:06:41.575611] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.730 [2024-07-23 14:06:41.575614] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575618] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.730 [2024-07-23 14:06:41.575629] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575633] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575636] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.730 [2024-07-23 14:06:41.575642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.730 [2024-07-23 14:06:41.575657] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.730 [2024-07-23 14:06:41.575791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.730 [2024-07-23 14:06:41.575800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.730 [2024-07-23 14:06:41.575803] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575807] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.730 [2024-07-23 14:06:41.575817] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575821] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575824] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.730 [2024-07-23 14:06:41.575830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.730 [2024-07-23 14:06:41.575842] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.730 [2024-07-23 14:06:41.575979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.730 [2024-07-23 14:06:41.575988] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.730 [2024-07-23 14:06:41.575991] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.575995] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.730 [2024-07-23 14:06:41.576006] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.576010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.576013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.730 [2024-07-23 14:06:41.576019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
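The burst of near-identical FABRIC PROPERTY GET records here is the controller shutdown in progress: once nvme_ctrlr_shutdown_set_cc_done has set CC.SHN (RTD3E = 0 us, shutdown timeout = 10000 ms above), the host polls CSTS.SHST until the controller reports shutdown complete, and over NVMe/TCP each of those register reads travels as a Fabrics Property Get capsule — hence the repeating cid:3 trace until "shutdown complete in 5 milliseconds" further down. A rough way to count the polls from a saved copy of this console output (the file name here is an assumption):

    # count CSTS shutdown-status polls in the saved log (log name assumed)
    grep -c 'FABRIC PROPERTY GET qid:0 cid:3' nvmf-tcp-phy-autotest.console.log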
00:24:50.730 [2024-07-23 14:06:41.576031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.730 [2024-07-23 14:06:41.576170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.730 [2024-07-23 14:06:41.576179] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.730 [2024-07-23 14:06:41.576182] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.576186] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.730 [2024-07-23 14:06:41.576197] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.576201] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.576204] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.730 [2024-07-23 14:06:41.576210] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.730 [2024-07-23 14:06:41.576222] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.730 [2024-07-23 14:06:41.576359] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.730 [2024-07-23 14:06:41.576367] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.730 [2024-07-23 14:06:41.576371] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.576374] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.730 [2024-07-23 14:06:41.576385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.730 [2024-07-23 14:06:41.576389] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.731 [2024-07-23 14:06:41.576398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.731 [2024-07-23 14:06:41.576410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.731 [2024-07-23 14:06:41.576543] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.731 [2024-07-23 14:06:41.576553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.731 [2024-07-23 14:06:41.576556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576559] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.731 [2024-07-23 14:06:41.576570] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576574] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576577] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.731 [2024-07-23 14:06:41.576583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.731 [2024-07-23 14:06:41.576595] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.731 [2024-07-23 14:06:41.576732] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.731 [2024-07-23 14:06:41.576741] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.731 [2024-07-23 14:06:41.576744] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576748] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.731 [2024-07-23 14:06:41.576758] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576762] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576765] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.731 [2024-07-23 14:06:41.576771] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.731 [2024-07-23 14:06:41.576783] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.731 [2024-07-23 14:06:41.576916] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.731 [2024-07-23 14:06:41.576924] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.731 [2024-07-23 14:06:41.576927] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576931] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.731 [2024-07-23 14:06:41.576942] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576945] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.576948] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.731 [2024-07-23 14:06:41.576955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.731 [2024-07-23 14:06:41.576966] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.731 [2024-07-23 14:06:41.581053] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.731 [2024-07-23 14:06:41.581065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.731 [2024-07-23 14:06:41.581068] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.581072] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.731 [2024-07-23 14:06:41.581084] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.581088] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.581091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x21aa9e0) 00:24:50.731 [2024-07-23 14:06:41.581097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.731 [2024-07-23 14:06:41.581110] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2212b50, cid 3, qid 0 00:24:50.731 [2024-07-23 14:06:41.581242] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:50.731 [2024-07-23 14:06:41.581254] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:50.731 
[2024-07-23 14:06:41.581257] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:50.731 [2024-07-23 14:06:41.581261] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x2212b50) on tqpair=0x21aa9e0 00:24:50.731 [2024-07-23 14:06:41.581270] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:24:50.731 0 Kelvin (-273 Celsius) 00:24:50.731 Available Spare: 0% 00:24:50.731 Available Spare Threshold: 0% 00:24:50.731 Life Percentage Used: 0% 00:24:50.731 Data Units Read: 0 00:24:50.731 Data Units Written: 0 00:24:50.731 Host Read Commands: 0 00:24:50.731 Host Write Commands: 0 00:24:50.731 Controller Busy Time: 0 minutes 00:24:50.731 Power Cycles: 0 00:24:50.731 Power On Hours: 0 hours 00:24:50.731 Unsafe Shutdowns: 0 00:24:50.731 Unrecoverable Media Errors: 0 00:24:50.731 Lifetime Error Log Entries: 0 00:24:50.731 Warning Temperature Time: 0 minutes 00:24:50.731 Critical Temperature Time: 0 minutes 00:24:50.731 00:24:50.731 Number of Queues 00:24:50.731 ================ 00:24:50.731 Number of I/O Submission Queues: 127 00:24:50.731 Number of I/O Completion Queues: 127 00:24:50.731 00:24:50.731 Active Namespaces 00:24:50.731 ================= 00:24:50.731 Namespace ID:1 00:24:50.731 Error Recovery Timeout: Unlimited 00:24:50.731 Command Set Identifier: NVM (00h) 00:24:50.731 Deallocate: Supported 00:24:50.731 Deallocated/Unwritten Error: Not Supported 00:24:50.731 Deallocated Read Value: Unknown 00:24:50.731 Deallocate in Write Zeroes: Not Supported 00:24:50.731 Deallocated Guard Field: 0xFFFF 00:24:50.731 Flush: Supported 00:24:50.731 Reservation: Supported 00:24:50.731 Namespace Sharing Capabilities: Multiple Controllers 00:24:50.731 Size (in LBAs): 131072 (0GiB) 00:24:50.731 Capacity (in LBAs): 131072 (0GiB) 00:24:50.731 Utilization (in LBAs): 131072 (0GiB) 00:24:50.731 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:50.731 EUI64: ABCDEF0123456789 00:24:50.731 UUID: b5644b93-7d72-4d5f-bcac-d2cd5ab30106 00:24:50.731 Thin Provisioning: Not Supported 00:24:50.731 Per-NS Atomic Units: Yes 00:24:50.731 Atomic Boundary Size (Normal): 0 00:24:50.731 Atomic Boundary Size (PFail): 0 00:24:50.731 Atomic Boundary Offset: 0 00:24:50.731 Maximum Single Source Range Length: 65535 00:24:50.731 Maximum Copy Length: 65535 00:24:50.731 Maximum Source Range Count: 1 00:24:50.731 NGUID/EUI64 Never Reused: No 00:24:50.731 Namespace Write Protected: No 00:24:50.731 Number of LBA Formats: 1 00:24:50.731 Current LBA Format: LBA Format #00 00:24:50.731 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:50.731 00:24:50.731 14:06:41 -- host/identify.sh@51 -- # sync 00:24:50.731 14:06:41 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:50.731 14:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:50.731 14:06:41 -- common/autotest_common.sh@10 -- # set +x 00:24:50.731 14:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:50.731 14:06:41 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:50.731 14:06:41 -- host/identify.sh@56 -- # nvmftestfini 00:24:50.731 14:06:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:50.731 14:06:41 -- nvmf/common.sh@116 -- # sync 00:24:50.731 14:06:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:50.731 14:06:41 -- nvmf/common.sh@119 -- # set +e 00:24:50.731 14:06:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:50.731 14:06:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:50.731 rmmod 
nvme_tcp 00:24:50.731 rmmod nvme_fabrics 00:24:50.731 rmmod nvme_keyring 00:24:50.731 14:06:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:50.731 14:06:41 -- nvmf/common.sh@123 -- # set -e 00:24:50.731 14:06:41 -- nvmf/common.sh@124 -- # return 0 00:24:50.731 14:06:41 -- nvmf/common.sh@477 -- # '[' -n 3370134 ']' 00:24:50.731 14:06:41 -- nvmf/common.sh@478 -- # killprocess 3370134 00:24:50.731 14:06:41 -- common/autotest_common.sh@926 -- # '[' -z 3370134 ']' 00:24:50.731 14:06:41 -- common/autotest_common.sh@930 -- # kill -0 3370134 00:24:50.731 14:06:41 -- common/autotest_common.sh@931 -- # uname 00:24:50.731 14:06:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:50.731 14:06:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3370134 00:24:50.731 14:06:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:50.731 14:06:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:50.731 14:06:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3370134' 00:24:50.731 killing process with pid 3370134 00:24:50.731 14:06:41 -- common/autotest_common.sh@945 -- # kill 3370134 00:24:50.731 [2024-07-23 14:06:41.710618] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:50.731 14:06:41 -- common/autotest_common.sh@950 -- # wait 3370134 00:24:50.991 14:06:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:50.991 14:06:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:50.991 14:06:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:50.991 14:06:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:50.991 14:06:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:50.991 14:06:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.991 14:06:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:50.991 14:06:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.531 14:06:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:53.531 00:24:53.531 real 0m8.883s 00:24:53.531 user 0m7.134s 00:24:53.531 sys 0m4.221s 00:24:53.531 14:06:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:53.531 14:06:44 -- common/autotest_common.sh@10 -- # set +x 00:24:53.531 ************************************ 00:24:53.531 END TEST nvmf_identify 00:24:53.531 ************************************ 00:24:53.531 14:06:44 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:53.531 14:06:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:53.531 14:06:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:53.531 14:06:44 -- common/autotest_common.sh@10 -- # set +x 00:24:53.531 ************************************ 00:24:53.531 START TEST nvmf_perf 00:24:53.531 ************************************ 00:24:53.531 14:06:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:53.531 * Looking for test storage... 
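The nvmftestfini teardown just traced boils down to the following sequence, condensed with values from this run (paths shortened; the sudo prefixes are an assumption about how this would be run by hand):

    sudo ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sudo modprobe -v -r nvme-tcp        # also unloads nvme_fabrics and nvme_keyring, per the rmmod output above
    sudo modprobe -v -r nvme-fabrics
    kill 3370134                        # nvmfpid, the nvmf_tgt reactor process from this run
    sudo ip -4 addr flush cvl_0_1       # clear the initiator-side test address

killprocess guards the kill with the uname/ps checks seen above (it refuses to kill a process named sudo) and then waits for the pid to exit before the test is declared done.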
00:24:53.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.531 14:06:44 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.531 14:06:44 -- nvmf/common.sh@7 -- # uname -s 00:24:53.531 14:06:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.531 14:06:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.531 14:06:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.531 14:06:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.531 14:06:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.531 14:06:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.531 14:06:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.531 14:06:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.531 14:06:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.531 14:06:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.531 14:06:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.531 14:06:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:53.531 14:06:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.531 14:06:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.531 14:06:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.531 14:06:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.531 14:06:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.531 14:06:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.531 14:06:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.531 14:06:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.531 14:06:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.531 14:06:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.531 14:06:44 -- paths/export.sh@5 -- # export PATH 00:24:53.531 14:06:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.531 14:06:44 -- nvmf/common.sh@46 -- # : 0 00:24:53.531 14:06:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:53.531 14:06:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:53.531 14:06:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:53.531 14:06:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.531 14:06:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.531 14:06:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:53.531 14:06:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:53.531 14:06:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:53.531 14:06:44 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:53.531 14:06:44 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:53.531 14:06:44 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:53.531 14:06:44 -- host/perf.sh@17 -- # nvmftestinit 00:24:53.531 14:06:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:53.531 14:06:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.531 14:06:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:53.531 14:06:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:53.531 14:06:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:53.531 14:06:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.531 14:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.531 14:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.531 14:06:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:53.531 14:06:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:53.531 14:06:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:53.531 14:06:44 -- common/autotest_common.sh@10 -- # set +x 00:24:58.885 14:06:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:58.885 14:06:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:58.885 14:06:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:58.885 14:06:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:58.885 14:06:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:58.885 14:06:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:58.885 14:06:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:58.885 14:06:49 -- nvmf/common.sh@294 -- # net_devs=() 
00:24:58.885 14:06:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:58.885 14:06:49 -- nvmf/common.sh@295 -- # e810=() 00:24:58.885 14:06:49 -- nvmf/common.sh@295 -- # local -ga e810 00:24:58.885 14:06:49 -- nvmf/common.sh@296 -- # x722=() 00:24:58.885 14:06:49 -- nvmf/common.sh@296 -- # local -ga x722 00:24:58.885 14:06:49 -- nvmf/common.sh@297 -- # mlx=() 00:24:58.885 14:06:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:58.885 14:06:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.885 14:06:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:58.885 14:06:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:58.885 14:06:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:58.885 14:06:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:58.885 14:06:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:58.885 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:58.885 14:06:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:58.885 14:06:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:58.885 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:58.885 14:06:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:58.885 14:06:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:58.885 14:06:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.885 14:06:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:58.885 14:06:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
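gather_supported_nvmf_pci_devs, traced above, walks the PCI bus for the vendor/device IDs in the e810/x722/mlx tables; with SPDK_TEST_NVMF_NICS=e810, the two 0x8086:0x159b ports at 0000:86:00.0 and 0000:86:00.1 match. A standalone sketch of the same sysfs scan (this loop is written for illustration, not copied from nvmf/common.sh):

    # enumerate E810 ports (0x8086:0x159b) and the net devices bound to them
    for pci in /sys/bus/pci/devices/*; do
        [ "$(cat "$pci/vendor")" = 0x8086 ] || continue
        [ "$(cat "$pci/device")" = 0x159b ] || continue
        echo "Found ${pci##*/}: $(ls "$pci/net")"   # e.g. cvl_0_0
    done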
00:24:58.885 14:06:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:58.885 Found net devices under 0000:86:00.0: cvl_0_0 00:24:58.885 14:06:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.885 14:06:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:58.885 14:06:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.885 14:06:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:58.885 14:06:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.885 14:06:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:58.885 Found net devices under 0000:86:00.1: cvl_0_1 00:24:58.885 14:06:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.885 14:06:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:58.885 14:06:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:58.885 14:06:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:58.885 14:06:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.885 14:06:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.885 14:06:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.885 14:06:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:58.885 14:06:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.885 14:06:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.885 14:06:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:58.885 14:06:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.885 14:06:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.885 14:06:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:58.885 14:06:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:58.885 14:06:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.885 14:06:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.885 14:06:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.885 14:06:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.885 14:06:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:58.885 14:06:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.885 14:06:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.885 14:06:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.885 14:06:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:58.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:24:58.885 00:24:58.885 --- 10.0.0.2 ping statistics --- 00:24:58.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.885 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:24:58.885 14:06:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:24:58.885 00:24:58.885 --- 10.0.0.1 ping statistics --- 00:24:58.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.885 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:24:58.885 14:06:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.885 14:06:49 -- nvmf/common.sh@410 -- # return 0 00:24:58.885 14:06:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:58.885 14:06:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.885 14:06:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:58.885 14:06:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.885 14:06:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:58.885 14:06:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:58.885 14:06:49 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:58.885 14:06:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:58.885 14:06:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:58.885 14:06:49 -- common/autotest_common.sh@10 -- # set +x 00:24:58.885 14:06:49 -- nvmf/common.sh@469 -- # nvmfpid=3373724 00:24:58.885 14:06:49 -- nvmf/common.sh@470 -- # waitforlisten 3373724 00:24:58.885 14:06:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.885 14:06:49 -- common/autotest_common.sh@819 -- # '[' -z 3373724 ']' 00:24:58.885 14:06:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.886 14:06:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:58.886 14:06:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.886 14:06:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:58.886 14:06:49 -- common/autotest_common.sh@10 -- # set +x 00:24:58.886 [2024-07-23 14:06:49.446889] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:24:58.886 [2024-07-23 14:06:49.446930] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.886 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.886 [2024-07-23 14:06:49.503981] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.886 [2024-07-23 14:06:49.581962] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:58.886 [2024-07-23 14:06:49.582076] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.886 [2024-07-23 14:06:49.582084] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.886 [2024-07-23 14:06:49.582091] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
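nvmf_tcp_init, traced above, builds a loopback fabric out of the two E810 ports: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings confirm the path in both directions before any NVMe traffic flows. The same topology, condensed from the commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port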
00:24:58.886 [2024-07-23 14:06:49.582139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.886 [2024-07-23 14:06:49.582234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.886 [2024-07-23 14:06:49.582296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.886 [2024-07-23 14:06:49.582297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.455 14:06:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:59.455 14:06:50 -- common/autotest_common.sh@852 -- # return 0 00:24:59.455 14:06:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:59.455 14:06:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:59.455 14:06:50 -- common/autotest_common.sh@10 -- # set +x 00:24:59.455 14:06:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.455 14:06:50 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:59.455 14:06:50 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:02.750 14:06:53 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:02.750 14:06:53 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:02.750 14:06:53 -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:25:02.750 14:06:53 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:02.750 14:06:53 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:02.750 14:06:53 -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:25:02.750 14:06:53 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:02.750 14:06:53 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:02.750 14:06:53 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:03.009 [2024-07-23 14:06:53.815892] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.009 14:06:53 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:03.009 14:06:54 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:03.009 14:06:54 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:03.269 14:06:54 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:03.269 14:06:54 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:03.528 14:06:54 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.788 [2024-07-23 14:06:54.550786] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.788 14:06:54 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:03.788 14:06:54 -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:25:03.788 14:06:54 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:03.788 14:06:54 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
00:25:03.788 14:06:54 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:05.166 Initializing NVMe Controllers 00:25:05.166 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:25:05.166 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:25:05.166 Initialization complete. Launching workers. 00:25:05.166 ======================================================== 00:25:05.166 Latency(us) 00:25:05.166 Device Information : IOPS MiB/s Average min max 00:25:05.166 PCIE (0000:5e:00.0) NSID 1 from core 0: 99297.94 387.88 321.95 25.17 8196.08 00:25:05.166 ======================================================== 00:25:05.166 Total : 99297.94 387.88 321.95 25.17 8196.08 00:25:05.166 00:25:05.166 14:06:55 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:05.166 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.546 Initializing NVMe Controllers 00:25:06.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:06.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:06.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:06.546 Initialization complete. Launching workers. 00:25:06.546 ======================================================== 00:25:06.546 Latency(us) 00:25:06.546 Device Information : IOPS MiB/s Average min max 00:25:06.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 59.00 0.23 17216.71 473.55 45473.09 00:25:06.546 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 63.00 0.25 16213.66 7961.48 47897.17 00:25:06.546 ======================================================== 00:25:06.546 Total : 122.00 0.48 16698.74 473.55 47897.17 00:25:06.546 00:25:06.546 14:06:57 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:06.546 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.925 Initializing NVMe Controllers 00:25:07.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:07.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:07.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:07.925 Initialization complete. Launching workers. 
00:25:07.925 ======================================================== 00:25:07.925 Latency(us) 00:25:07.925 Device Information : IOPS MiB/s Average min max 00:25:07.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8012.43 31.30 4008.78 762.17 45341.26 00:25:07.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3907.58 15.26 8203.07 2397.02 15960.11 00:25:07.925 ======================================================== 00:25:07.925 Total : 11920.00 46.56 5383.74 762.17 45341.26 00:25:07.925 00:25:07.925 14:06:58 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:07.925 14:06:58 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:07.925 14:06:58 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:07.925 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.463 Initializing NVMe Controllers 00:25:10.463 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.463 Controller IO queue size 128, less than required. 00:25:10.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:10.463 Controller IO queue size 128, less than required. 00:25:10.463 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:10.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:10.463 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:10.463 Initialization complete. Launching workers. 00:25:10.463 ======================================================== 00:25:10.463 Latency(us) 00:25:10.463 Device Information : IOPS MiB/s Average min max 00:25:10.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 905.93 226.48 145262.14 81221.89 238298.85 00:25:10.463 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 558.34 139.59 238653.15 103226.91 366671.07 00:25:10.463 ======================================================== 00:25:10.463 Total : 1464.27 366.07 180873.04 81221.89 366671.07 00:25:10.463 00:25:10.463 14:07:01 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:10.463 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.463 No valid NVMe controllers or AIO or URING devices found 00:25:10.464 Initializing NVMe Controllers 00:25:10.464 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:10.464 Controller IO queue size 128, less than required. 00:25:10.464 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:10.464 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:10.464 Controller IO queue size 128, less than required. 00:25:10.464 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:10.464 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:10.464 WARNING: Some requested NVMe devices were skipped 00:25:10.464 14:07:01 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:10.464 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.002 Initializing NVMe Controllers 00:25:13.002 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:13.002 Controller IO queue size 128, less than required. 00:25:13.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:13.002 Controller IO queue size 128, less than required. 00:25:13.002 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:13.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:13.002 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:13.002 Initialization complete. Launching workers. 00:25:13.002 00:25:13.002 ==================== 00:25:13.002 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:13.002 TCP transport: 00:25:13.002 polls: 51244 00:25:13.002 idle_polls: 18155 00:25:13.002 sock_completions: 33089 00:25:13.002 nvme_completions: 3280 00:25:13.002 submitted_requests: 5118 00:25:13.002 queued_requests: 1 00:25:13.002 00:25:13.002 ==================== 00:25:13.002 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:13.002 TCP transport: 00:25:13.002 polls: 51558 00:25:13.002 idle_polls: 16714 00:25:13.002 sock_completions: 34844 00:25:13.002 nvme_completions: 3202 00:25:13.002 submitted_requests: 4937 00:25:13.002 queued_requests: 1 00:25:13.002 ======================================================== 00:25:13.002 Latency(us) 00:25:13.002 Device Information : IOPS MiB/s Average min max 00:25:13.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 883.50 220.87 149290.11 80349.68 234897.55 00:25:13.002 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 864.00 216.00 150855.54 84827.82 229985.57 00:25:13.002 ======================================================== 00:25:13.002 Total : 1747.49 436.87 150064.09 80349.68 234897.55 00:25:13.002 00:25:13.002 14:07:03 -- host/perf.sh@66 -- # sync 00:25:13.002 14:07:03 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.002 14:07:03 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:13.002 14:07:03 -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:25:13.002 14:07:03 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:16.289 14:07:07 -- host/perf.sh@72 -- # ls_guid=13ef269d-e7e5-4899-8ff9-d26fdbe36a63 00:25:16.289 14:07:07 -- host/perf.sh@73 -- # get_lvs_free_mb 13ef269d-e7e5-4899-8ff9-d26fdbe36a63 00:25:16.289 14:07:07 -- common/autotest_common.sh@1343 -- # local lvs_uuid=13ef269d-e7e5-4899-8ff9-d26fdbe36a63 00:25:16.289 14:07:07 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:16.289 14:07:07 -- common/autotest_common.sh@1345 -- # local fc 00:25:16.289 14:07:07 -- common/autotest_common.sh@1346 -- # local cs 00:25:16.289 14:07:07 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:16.546 14:07:07 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:16.546 { 00:25:16.546 "uuid": "13ef269d-e7e5-4899-8ff9-d26fdbe36a63", 00:25:16.546 "name": "lvs_0", 00:25:16.546 "base_bdev": "Nvme0n1", 00:25:16.546 "total_data_clusters": 238234, 00:25:16.546 "free_clusters": 238234, 00:25:16.546 "block_size": 512, 00:25:16.546 "cluster_size": 4194304 00:25:16.546 } 00:25:16.546 ]' 00:25:16.546 14:07:07 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="13ef269d-e7e5-4899-8ff9-d26fdbe36a63") .free_clusters' 00:25:16.546 14:07:07 -- common/autotest_common.sh@1348 -- # fc=238234 00:25:16.546 14:07:07 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="13ef269d-e7e5-4899-8ff9-d26fdbe36a63") .cluster_size' 00:25:16.546 14:07:07 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:16.546 14:07:07 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:25:16.546 14:07:07 -- common/autotest_common.sh@1353 -- # echo 952936 00:25:16.546 952936 00:25:16.546 14:07:07 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:25:16.546 14:07:07 -- host/perf.sh@78 -- # free_mb=20480 00:25:16.546 14:07:07 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 13ef269d-e7e5-4899-8ff9-d26fdbe36a63 lbd_0 20480 00:25:17.112 14:07:07 -- host/perf.sh@80 -- # lb_guid=f67cd445-c19f-48a1-a416-288eb7750914 00:25:17.112 14:07:07 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore f67cd445-c19f-48a1-a416-288eb7750914 lvs_n_0 00:25:17.679 14:07:08 -- host/perf.sh@83 -- # ls_nested_guid=1b343e5e-239e-49d1-b141-ae531b905b06 00:25:17.679 14:07:08 -- host/perf.sh@84 -- # get_lvs_free_mb 1b343e5e-239e-49d1-b141-ae531b905b06 00:25:17.679 14:07:08 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1b343e5e-239e-49d1-b141-ae531b905b06 00:25:17.679 14:07:08 -- common/autotest_common.sh@1344 -- # local lvs_info 00:25:17.679 14:07:08 -- common/autotest_common.sh@1345 -- # local fc 00:25:17.679 14:07:08 -- common/autotest_common.sh@1346 -- # local cs 00:25:17.679 14:07:08 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:17.936 14:07:08 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:25:17.936 { 00:25:17.936 "uuid": "13ef269d-e7e5-4899-8ff9-d26fdbe36a63", 00:25:17.936 "name": "lvs_0", 00:25:17.936 "base_bdev": "Nvme0n1", 00:25:17.936 "total_data_clusters": 238234, 00:25:17.936 "free_clusters": 233114, 00:25:17.936 "block_size": 512, 00:25:17.936 "cluster_size": 4194304 00:25:17.936 }, 00:25:17.936 { 00:25:17.936 "uuid": "1b343e5e-239e-49d1-b141-ae531b905b06", 00:25:17.936 "name": "lvs_n_0", 00:25:17.936 "base_bdev": "f67cd445-c19f-48a1-a416-288eb7750914", 00:25:17.936 "total_data_clusters": 5114, 00:25:17.936 "free_clusters": 5114, 00:25:17.936 "block_size": 512, 00:25:17.936 "cluster_size": 4194304 00:25:17.936 } 00:25:17.936 ]' 00:25:17.936 14:07:08 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1b343e5e-239e-49d1-b141-ae531b905b06") .free_clusters' 00:25:17.936 14:07:08 -- common/autotest_common.sh@1348 -- # fc=5114 00:25:17.936 14:07:08 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1b343e5e-239e-49d1-b141-ae531b905b06") .cluster_size' 00:25:17.936 14:07:08 -- common/autotest_common.sh@1349 -- # cs=4194304 00:25:17.936 14:07:08 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:25:17.936 14:07:08 -- common/autotest_common.sh@1353 -- # echo 20456 00:25:17.936 20456 00:25:17.936 14:07:08 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:17.936 14:07:08 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1b343e5e-239e-49d1-b141-ae531b905b06 lbd_nest_0 20456 00:25:18.195 14:07:09 -- host/perf.sh@88 -- # lb_nested_guid=98975c21-ae1b-41d4-99f6-6ca17fe9c45e 00:25:18.195 14:07:09 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:18.195 14:07:09 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:18.195 14:07:09 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 98975c21-ae1b-41d4-99f6-6ca17fe9c45e 00:25:18.452 14:07:09 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:18.710 14:07:09 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:18.710 14:07:09 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:18.710 14:07:09 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:18.710 14:07:09 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:18.710 14:07:09 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:18.710 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.019 Initializing NVMe Controllers 00:25:31.019 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:31.019 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:31.019 Initialization complete. Launching workers. 00:25:31.019 ======================================================== 00:25:31.019 Latency(us) 00:25:31.019 Device Information : IOPS MiB/s Average min max 00:25:31.019 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.58 0.02 21534.93 335.62 45486.81 00:25:31.019 ======================================================== 00:25:31.019 Total : 46.58 0.02 21534.93 335.62 45486.81 00:25:31.019 00:25:31.019 14:07:20 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:31.019 14:07:20 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:31.019 EAL: No free 2048 kB hugepages reported on node 1 00:25:41.005 Initializing NVMe Controllers 00:25:41.005 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:41.005 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:41.005 Initialization complete. Launching workers. 
00:25:41.005 ======================================================== 00:25:41.005 Latency(us) 00:25:41.005 Device Information : IOPS MiB/s Average min max 00:25:41.005 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 82.98 10.37 12070.06 6053.73 47836.64 00:25:41.005 ======================================================== 00:25:41.005 Total : 82.98 10.37 12070.06 6053.73 47836.64 00:25:41.005 00:25:41.005 14:07:30 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:41.005 14:07:30 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:41.005 14:07:30 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:41.005 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.987 Initializing NVMe Controllers 00:25:50.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:50.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:50.987 Initialization complete. Launching workers. 00:25:50.987 ======================================================== 00:25:50.987 Latency(us) 00:25:50.987 Device Information : IOPS MiB/s Average min max 00:25:50.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7257.82 3.54 4408.47 471.98 12109.71 00:25:50.987 ======================================================== 00:25:50.988 Total : 7257.82 3.54 4408.47 471.98 12109.71 00:25:50.988 00:25:50.988 14:07:40 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:50.988 14:07:40 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:50.988 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.968 Initializing NVMe Controllers 00:26:00.968 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.968 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:00.968 Initialization complete. Launching workers. 00:26:00.968 ======================================================== 00:26:00.968 Latency(us) 00:26:00.968 Device Information : IOPS MiB/s Average min max 00:26:00.968 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1611.99 201.50 19864.95 1170.72 58025.19 00:26:00.968 ======================================================== 00:26:00.969 Total : 1611.99 201.50 19864.95 1170.72 58025.19 00:26:00.969 00:26:00.969 14:07:50 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:00.969 14:07:50 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:00.969 14:07:50 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:00.969 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.949 Initializing NVMe Controllers 00:26:10.949 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:10.949 Controller IO queue size 128, less than required. 00:26:10.949 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:10.949 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:10.949 Initialization complete. Launching workers. 
00:26:10.949 ======================================================== 00:26:10.949 Latency(us) 00:26:10.949 Device Information : IOPS MiB/s Average min max 00:26:10.949 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14777.29 7.22 8665.27 1339.45 22838.74 00:26:10.949 ======================================================== 00:26:10.949 Total : 14777.29 7.22 8665.27 1339.45 22838.74 00:26:10.949 00:26:10.949 14:08:01 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:10.949 14:08:01 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:10.949 EAL: No free 2048 kB hugepages reported on node 1 00:26:20.999 Initializing NVMe Controllers 00:26:20.999 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:20.999 Controller IO queue size 128, less than required. 00:26:20.999 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:20.999 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:20.999 Initialization complete. Launching workers. 00:26:20.999 ======================================================== 00:26:20.999 Latency(us) 00:26:20.999 Device Information : IOPS MiB/s Average min max 00:26:20.999 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1170.51 146.31 109883.70 14432.28 234533.14 00:26:20.999 ======================================================== 00:26:20.999 Total : 1170.51 146.31 109883.70 14432.28 234533.14 00:26:20.999 00:26:20.999 14:08:11 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.999 14:08:11 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 98975c21-ae1b-41d4-99f6-6ca17fe9c45e 00:26:21.567 14:08:12 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:21.567 14:08:12 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f67cd445-c19f-48a1-a416-288eb7750914 00:26:21.826 14:08:12 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:26:22.086 14:08:12 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:22.086 14:08:12 -- host/perf.sh@114 -- # nvmftestfini 00:26:22.086 14:08:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:22.086 14:08:12 -- nvmf/common.sh@116 -- # sync 00:26:22.086 14:08:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:22.086 14:08:12 -- nvmf/common.sh@119 -- # set +e 00:26:22.086 14:08:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:22.086 14:08:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:22.086 rmmod nvme_tcp 00:26:22.086 rmmod nvme_fabrics 00:26:22.086 rmmod nvme_keyring 00:26:22.086 14:08:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:22.086 14:08:13 -- nvmf/common.sh@123 -- # set -e 00:26:22.086 14:08:13 -- nvmf/common.sh@124 -- # return 0 00:26:22.086 14:08:13 -- nvmf/common.sh@477 -- # '[' -n 3373724 ']' 00:26:22.086 14:08:13 -- nvmf/common.sh@478 -- # killprocess 3373724 00:26:22.086 14:08:13 -- common/autotest_common.sh@926 -- # '[' -z 3373724 ']' 00:26:22.086 14:08:13 -- common/autotest_common.sh@930 -- # kill 
-0 3373724 00:26:22.086 14:08:13 -- common/autotest_common.sh@931 -- # uname 00:26:22.086 14:08:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:22.086 14:08:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3373724 00:26:22.086 14:08:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:22.086 14:08:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:22.086 14:08:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3373724' 00:26:22.086 killing process with pid 3373724 00:26:22.086 14:08:13 -- common/autotest_common.sh@945 -- # kill 3373724 00:26:22.086 14:08:13 -- common/autotest_common.sh@950 -- # wait 3373724 00:26:23.992 14:08:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:23.992 14:08:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:23.992 14:08:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:23.992 14:08:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.992 14:08:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:23.992 14:08:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.992 14:08:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.992 14:08:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.900 14:08:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:25.900 00:26:25.900 real 1m32.645s 00:26:25.900 user 5m35.207s 00:26:25.900 sys 0m13.353s 00:26:25.900 14:08:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:25.900 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:26:25.900 ************************************ 00:26:25.900 END TEST nvmf_perf 00:26:25.900 ************************************ 00:26:25.900 14:08:16 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:25.900 14:08:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:25.900 14:08:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:25.900 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:26:25.900 ************************************ 00:26:25.900 START TEST nvmf_fio_host 00:26:25.900 ************************************ 00:26:25.900 14:08:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:25.900 * Looking for test storage... 
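The killprocess trace above shows the shutdown pattern the harness uses for the target app: probe the PID, confirm it is the expected reactor process rather than a bare sudo wrapper, then kill and reap it. A simplified sketch of what the trace performs (the real helper in autotest_common.sh carries extra branches for the sudo case):

  killprocess() {
    local pid=$1 name
    kill -0 "$pid" || return                  # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")   # reactor_0 in this run
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                # reap so sockets and hugepages are released
  }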
00:26:25.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:25.900 14:08:16 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.900 14:08:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.900 14:08:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.900 14:08:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.900 14:08:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.900 14:08:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.900 14:08:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.900 14:08:16 -- paths/export.sh@5 -- # export PATH 00:26:25.900 14:08:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.900 14:08:16 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.900 14:08:16 -- nvmf/common.sh@7 -- # uname -s 00:26:25.900 14:08:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.900 14:08:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.900 14:08:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.900 14:08:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.900 14:08:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.900 14:08:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.900 14:08:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.900 14:08:16 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.900 14:08:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.900 14:08:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.900 14:08:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:25.900 14:08:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:25.900 14:08:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.900 14:08:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.900 14:08:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:25.900 14:08:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:25.900 14:08:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.900 14:08:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.900 14:08:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.900 14:08:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.900 14:08:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.900 14:08:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.900 14:08:16 -- paths/export.sh@5 -- # export PATH 00:26:25.900 14:08:16 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.900 14:08:16 -- nvmf/common.sh@46 -- # : 0 00:26:25.900 14:08:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:25.900 14:08:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:25.900 14:08:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:25.900 14:08:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.900 14:08:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.900 14:08:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:25.900 14:08:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:25.900 14:08:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:25.900 14:08:16 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:25.900 14:08:16 -- host/fio.sh@14 -- # nvmftestinit 00:26:25.900 14:08:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:25.900 14:08:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.900 14:08:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:25.900 14:08:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:25.900 14:08:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:25.900 14:08:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.900 14:08:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:25.900 14:08:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.900 14:08:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:25.900 14:08:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:25.900 14:08:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:25.900 14:08:16 -- common/autotest_common.sh@10 -- # set +x 00:26:31.176 14:08:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:31.176 14:08:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:31.176 14:08:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:31.176 14:08:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:31.176 14:08:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:31.176 14:08:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:31.176 14:08:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:31.176 14:08:21 -- nvmf/common.sh@294 -- # net_devs=() 00:26:31.176 14:08:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:31.176 14:08:21 -- nvmf/common.sh@295 -- # e810=() 00:26:31.176 14:08:21 -- nvmf/common.sh@295 -- # local -ga e810 00:26:31.176 14:08:21 -- nvmf/common.sh@296 -- # x722=() 00:26:31.176 14:08:21 -- nvmf/common.sh@296 -- # local -ga x722 00:26:31.176 14:08:21 -- nvmf/common.sh@297 -- # mlx=() 00:26:31.176 14:08:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:31.176 14:08:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.176 14:08:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.176 14:08:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.177 14:08:21 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.177 14:08:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.177 14:08:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.177 14:08:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.177 14:08:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.177 14:08:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.177 14:08:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.177 14:08:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.177 14:08:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:31.177 14:08:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:31.177 14:08:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:31.177 14:08:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:31.177 14:08:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:31.177 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:31.177 14:08:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:31.177 14:08:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:31.177 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:31.177 14:08:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:31.177 14:08:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:31.177 14:08:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.177 14:08:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:31.177 14:08:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.177 14:08:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:31.177 Found net devices under 0000:86:00.0: cvl_0_0 00:26:31.177 14:08:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.177 14:08:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:31.177 14:08:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.177 14:08:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:31.177 14:08:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.177 14:08:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:31.177 Found net devices under 0000:86:00.1: cvl_0_1 
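The NIC discovery above boils down to three steps: build per-family lists of PCI device IDs (0x1592/0x159b are E810, 0x37d2 is X722, the 0x15b3 entries are Mellanox), select the family named by SPDK_TEST_NVMF_NICS (e810 here), then map each surviving PCI address to its kernel interface through sysfs. A sketch of that last step, following the same sysfs glob the trace uses:

  for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev on this function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
  done

Both E810 functions (0000:86:00.0 and .1) resolve to the renamed interfaces cvl_0_0 and cvl_0_1, which the TCP setup below splits between target and initiator.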
00:26:31.177 14:08:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.177 14:08:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:31.177 14:08:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:31.177 14:08:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:31.177 14:08:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.177 14:08:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.177 14:08:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.177 14:08:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:31.177 14:08:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.177 14:08:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.177 14:08:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:31.177 14:08:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.177 14:08:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.177 14:08:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:31.177 14:08:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:31.177 14:08:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.177 14:08:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.177 14:08:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.177 14:08:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.177 14:08:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:31.177 14:08:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.177 14:08:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.177 14:08:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.177 14:08:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:31.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:26:31.177 00:26:31.177 --- 10.0.0.2 ping statistics --- 00:26:31.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.177 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:26:31.177 14:08:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:26:31.177 00:26:31.177 --- 10.0.0.1 ping statistics --- 00:26:31.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.177 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:26:31.177 14:08:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.177 14:08:21 -- nvmf/common.sh@410 -- # return 0 00:26:31.177 14:08:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:31.177 14:08:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.177 14:08:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:31.177 14:08:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.177 14:08:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:31.177 14:08:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:31.177 14:08:22 -- host/fio.sh@16 -- # [[ y != y ]] 00:26:31.177 14:08:22 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:26:31.177 14:08:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:31.177 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:26:31.177 14:08:22 -- host/fio.sh@24 -- # nvmfpid=3391064 00:26:31.177 14:08:22 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.177 14:08:22 -- host/fio.sh@28 -- # waitforlisten 3391064 00:26:31.177 14:08:22 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:31.177 14:08:22 -- common/autotest_common.sh@819 -- # '[' -z 3391064 ']' 00:26:31.177 14:08:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.177 14:08:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:31.177 14:08:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.177 14:08:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:31.177 14:08:22 -- common/autotest_common.sh@10 -- # set +x 00:26:31.177 [2024-07-23 14:08:22.046496] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:26:31.177 [2024-07-23 14:08:22.046535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.177 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.177 [2024-07-23 14:08:22.102984] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:31.177 [2024-07-23 14:08:22.173539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:31.177 [2024-07-23 14:08:22.173647] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.177 [2024-07-23 14:08:22.173658] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.177 [2024-07-23 14:08:22.173664] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
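Because both ports live on one host, nvmf_tcp_init splits them across network namespaces so the NVMe/TCP traffic genuinely leaves the machine: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2 for the target side, cvl_0_1 keeps 10.0.0.1 in the root namespace for the initiator, an iptables ACCEPT rule opens port 4420, and one ping in each direction (above) proves the path. The target is then launched inside that namespace, roughly:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # harness helper: blocks until /var/tmp/spdk.sock answers RPCs

The fio jobs that follow talk to it through the SPDK external ioengine rather than the kernel initiator. The ldd/grep sequence in the trace checks whether a sanitizer runtime needs to be preloaded ahead of the plugin (none in this build), after which fio runs along the lines of:

  LD_PRELOAD=./build/fio/spdk_nvme /usr/src/fio/fio example_config.fio \
    '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096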
00:26:31.177 [2024-07-23 14:08:22.173709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.177 [2024-07-23 14:08:22.173726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:31.177 [2024-07-23 14:08:22.173815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:31.177 [2024-07-23 14:08:22.173816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.115 14:08:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:32.115 14:08:22 -- common/autotest_common.sh@852 -- # return 0 00:26:32.115 14:08:22 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:32.115 [2024-07-23 14:08:23.005783] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.115 14:08:23 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:26:32.115 14:08:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:32.115 14:08:23 -- common/autotest_common.sh@10 -- # set +x 00:26:32.115 14:08:23 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:26:32.374 Malloc1 00:26:32.374 14:08:23 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:32.633 14:08:23 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:32.633 14:08:23 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.892 [2024-07-23 14:08:23.755862] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.892 14:08:23 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:33.152 14:08:23 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:33.152 14:08:23 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:33.152 14:08:23 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:33.152 14:08:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:33.152 14:08:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:33.152 14:08:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:33.152 14:08:23 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.152 14:08:23 -- common/autotest_common.sh@1320 -- # shift 00:26:33.152 14:08:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:33.152 14:08:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.152 14:08:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.152 14:08:23 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:26:33.152 14:08:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:33.152 14:08:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:33.152 14:08:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:33.152 14:08:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:33.152 14:08:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:33.152 14:08:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:33.152 14:08:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:33.152 14:08:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:33.152 14:08:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:33.152 14:08:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:33.152 14:08:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:33.411 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:33.411 fio-3.35 00:26:33.411 Starting 1 thread 00:26:33.411 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.947 00:26:35.947 test: (groupid=0, jobs=1): err= 0: pid=3391591: Tue Jul 23 14:08:26 2024 00:26:35.947 read: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(92.5MiB/2004msec) 00:26:35.947 slat (nsec): min=1568, max=238563, avg=1729.36, stdev=2240.67 00:26:35.947 clat (usec): min=3240, max=17165, avg=6270.17, stdev=1475.28 00:26:35.947 lat (usec): min=3241, max=17167, avg=6271.90, stdev=1475.41 00:26:35.947 clat percentiles (usec): 00:26:35.947 | 1.00th=[ 4146], 5.00th=[ 4752], 10.00th=[ 5080], 20.00th=[ 5342], 00:26:35.947 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 6063], 00:26:35.947 | 70.00th=[ 6325], 80.00th=[ 6783], 90.00th=[ 8029], 95.00th=[ 9372], 00:26:35.947 | 99.00th=[11863], 99.50th=[12649], 99.90th=[15401], 99.95th=[16057], 00:26:35.947 | 99.99th=[17171] 00:26:35.947 bw ( KiB/s): min=45328, max=48312, per=99.84%, avg=47214.00, stdev=1303.16, samples=4 00:26:35.947 iops : min=11332, max=12078, avg=11803.50, stdev=325.79, samples=4 00:26:35.947 write: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.1MiB/2004msec); 0 zone resets 00:26:35.947 slat (nsec): min=1627, max=225025, avg=1823.63, stdev=1621.66 00:26:35.947 clat (usec): min=2149, max=11535, avg=4504.10, stdev=799.63 00:26:35.947 lat (usec): min=2150, max=11558, avg=4505.92, stdev=799.84 00:26:35.947 clat percentiles (usec): 00:26:35.947 | 1.00th=[ 2769], 5.00th=[ 3261], 10.00th=[ 3556], 20.00th=[ 3949], 00:26:35.947 | 30.00th=[ 4178], 40.00th=[ 4359], 50.00th=[ 4490], 60.00th=[ 4621], 00:26:35.947 | 70.00th=[ 4752], 80.00th=[ 4948], 90.00th=[ 5276], 95.00th=[ 5800], 00:26:35.947 | 99.00th=[ 7308], 99.50th=[ 7767], 99.90th=[ 8979], 99.95th=[ 9896], 00:26:35.947 | 99.99th=[11338] 00:26:35.947 bw ( KiB/s): min=45720, max=47872, per=100.00%, avg=47070.00, stdev=1015.24, samples=4 00:26:35.947 iops : min=11430, max=11968, avg=11767.50, stdev=253.81, samples=4 00:26:35.947 lat (msec) : 4=11.45%, 10=86.62%, 20=1.93% 00:26:35.947 cpu : usr=69.05%, sys=24.86%, ctx=36, majf=0, minf=4 00:26:35.947 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:35.947 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.947 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:35.947 issued rwts: total=23691,23578,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.947 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:35.947 00:26:35.947 Run status group 0 (all jobs): 00:26:35.947 READ: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=92.5MiB (97.0MB), run=2004-2004msec 00:26:35.947 WRITE: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.1MiB (96.6MB), run=2004-2004msec 00:26:35.947 14:08:26 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:35.947 14:08:26 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:35.947 14:08:26 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:35.947 14:08:26 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:35.947 14:08:26 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:35.947 14:08:26 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:35.947 14:08:26 -- common/autotest_common.sh@1320 -- # shift 00:26:35.947 14:08:26 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:35.947 14:08:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.947 14:08:26 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:35.947 14:08:26 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:35.947 14:08:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:35.947 14:08:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:35.948 14:08:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:35.948 14:08:26 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.948 14:08:26 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:35.948 14:08:26 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:35.948 14:08:26 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:35.948 14:08:26 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:35.948 14:08:26 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:35.948 14:08:26 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:35.948 14:08:26 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:35.948 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:35.948 fio-3.35 00:26:35.948 Starting 1 thread 00:26:35.948 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.483 00:26:38.483 test: (groupid=0, jobs=1): err= 0: pid=3392035: Tue Jul 23 14:08:29 2024 00:26:38.483 read: IOPS=9719, BW=152MiB/s (159MB/s)(305MiB/2005msec) 00:26:38.483 slat (nsec): min=2565, max=86380, avg=2858.32, stdev=1448.75 00:26:38.483 clat (usec): min=2753, 
max=36521, avg=8107.69, stdev=3423.22 00:26:38.483 lat (usec): min=2756, max=36523, avg=8110.55, stdev=3423.80 00:26:38.483 clat percentiles (usec): 00:26:38.483 | 1.00th=[ 3785], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 5932], 00:26:38.483 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 7504], 60.00th=[ 8029], 00:26:38.483 | 70.00th=[ 8455], 80.00th=[ 9372], 90.00th=[10945], 95.00th=[12780], 00:26:38.483 | 99.00th=[23725], 99.50th=[24511], 99.90th=[26870], 99.95th=[26870], 00:26:38.483 | 99.99th=[35914] 00:26:38.483 bw ( KiB/s): min=67712, max=83072, per=49.34%, avg=76728.00, stdev=6458.00, samples=4 00:26:38.483 iops : min= 4232, max= 5192, avg=4795.50, stdev=403.62, samples=4 00:26:38.483 write: IOPS=5749, BW=89.8MiB/s (94.2MB/s)(157MiB/1742msec); 0 zone resets 00:26:38.483 slat (usec): min=30, max=376, avg=32.15, stdev= 6.95 00:26:38.483 clat (usec): min=3104, max=31357, avg=8926.24, stdev=3082.29 00:26:38.483 lat (usec): min=3136, max=31431, avg=8958.39, stdev=3085.38 00:26:38.483 clat percentiles (usec): 00:26:38.483 | 1.00th=[ 5669], 5.00th=[ 6390], 10.00th=[ 6718], 20.00th=[ 7308], 00:26:38.483 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8717], 00:26:38.483 | 70.00th=[ 9110], 80.00th=[ 9634], 90.00th=[10683], 95.00th=[12125], 00:26:38.483 | 99.00th=[26870], 99.50th=[27395], 99.90th=[28705], 99.95th=[28967], 00:26:38.483 | 99.99th=[31327] 00:26:38.483 bw ( KiB/s): min=71328, max=86272, per=86.78%, avg=79832.00, stdev=6262.16, samples=4 00:26:38.483 iops : min= 4458, max= 5392, avg=4989.50, stdev=391.39, samples=4 00:26:38.483 lat (msec) : 4=1.10%, 10=84.01%, 20=12.35%, 50=2.54% 00:26:38.483 cpu : usr=84.63%, sys=12.23%, ctx=33, majf=0, minf=1 00:26:38.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:26:38.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:38.483 issued rwts: total=19488,10016,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:38.484 00:26:38.484 Run status group 0 (all jobs): 00:26:38.484 READ: bw=152MiB/s (159MB/s), 152MiB/s-152MiB/s (159MB/s-159MB/s), io=305MiB (319MB), run=2005-2005msec 00:26:38.484 WRITE: bw=89.8MiB/s (94.2MB/s), 89.8MiB/s-89.8MiB/s (94.2MB/s-94.2MB/s), io=157MiB (164MB), run=1742-1742msec 00:26:38.484 14:08:29 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:38.484 14:08:29 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:26:38.484 14:08:29 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:26:38.484 14:08:29 -- host/fio.sh@51 -- # get_nvme_bdfs 00:26:38.484 14:08:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:38.484 14:08:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:26:38.484 14:08:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:38.484 14:08:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:38.484 14:08:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:38.484 14:08:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:26:38.484 14:08:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:26:38.484 14:08:29 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b 
Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:26:41.775 Nvme0n1 00:26:41.775 14:08:32 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:26:44.367 14:08:35 -- host/fio.sh@53 -- # ls_guid=ec8843d6-a133-4f93-8e0e-5d24f6e712b1 00:26:44.367 14:08:35 -- host/fio.sh@54 -- # get_lvs_free_mb ec8843d6-a133-4f93-8e0e-5d24f6e712b1 00:26:44.367 14:08:35 -- common/autotest_common.sh@1343 -- # local lvs_uuid=ec8843d6-a133-4f93-8e0e-5d24f6e712b1 00:26:44.367 14:08:35 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:44.367 14:08:35 -- common/autotest_common.sh@1345 -- # local fc 00:26:44.367 14:08:35 -- common/autotest_common.sh@1346 -- # local cs 00:26:44.367 14:08:35 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:44.626 14:08:35 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:44.626 { 00:26:44.626 "uuid": "ec8843d6-a133-4f93-8e0e-5d24f6e712b1", 00:26:44.626 "name": "lvs_0", 00:26:44.626 "base_bdev": "Nvme0n1", 00:26:44.626 "total_data_clusters": 930, 00:26:44.626 "free_clusters": 930, 00:26:44.626 "block_size": 512, 00:26:44.626 "cluster_size": 1073741824 00:26:44.626 } 00:26:44.626 ]' 00:26:44.626 14:08:35 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="ec8843d6-a133-4f93-8e0e-5d24f6e712b1") .free_clusters' 00:26:44.626 14:08:35 -- common/autotest_common.sh@1348 -- # fc=930 00:26:44.626 14:08:35 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="ec8843d6-a133-4f93-8e0e-5d24f6e712b1") .cluster_size' 00:26:44.626 14:08:35 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:26:44.626 14:08:35 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:26:44.626 14:08:35 -- common/autotest_common.sh@1353 -- # echo 952320 00:26:44.626 952320 00:26:44.626 14:08:35 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:26:44.885 16d24ac6-83ca-428f-9a57-f9dbc1c13f96 00:26:44.885 14:08:35 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:26:45.144 14:08:36 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:26:45.403 14:08:36 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:45.403 14:08:36 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:45.403 14:08:36 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:45.403 14:08:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:45.403 14:08:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:45.403 14:08:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:45.403 14:08:36 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.403 14:08:36 -- common/autotest_common.sh@1320 -- # shift 00:26:45.403 14:08:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:45.403 14:08:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.403 14:08:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.403 14:08:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:45.403 14:08:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:45.403 14:08:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:45.403 14:08:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:45.403 14:08:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.661 14:08:36 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:45.661 14:08:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:45.661 14:08:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:45.661 14:08:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:45.661 14:08:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:45.661 14:08:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:45.661 14:08:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:45.920 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:45.920 fio-3.35 00:26:45.920 Starting 1 thread 00:26:45.920 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.457 00:26:48.457 test: (groupid=0, jobs=1): err= 0: pid=3393800: Tue Jul 23 14:08:39 2024 00:26:48.457 read: IOPS=8084, BW=31.6MiB/s (33.1MB/s)(63.4MiB/2007msec) 00:26:48.457 slat (nsec): min=1587, max=95117, avg=1706.87, stdev=1035.94 00:26:48.457 clat (usec): min=1563, max=173650, avg=8931.50, stdev=10587.03 00:26:48.457 lat (usec): min=1564, max=173658, avg=8933.20, stdev=10587.18 00:26:48.457 clat percentiles (msec): 00:26:48.457 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:26:48.457 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:26:48.457 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 12], 00:26:48.457 | 99.00th=[ 15], 99.50th=[ 17], 99.90th=[ 174], 99.95th=[ 174], 00:26:48.457 | 99.99th=[ 174] 00:26:48.457 bw ( KiB/s): min=22120, max=35848, per=100.00%, avg=32338.00, stdev=6812.74, samples=4 00:26:48.457 iops : min= 5530, max= 8962, avg=8084.50, stdev=1703.19, samples=4 00:26:48.457 write: IOPS=8080, BW=31.6MiB/s (33.1MB/s)(63.4MiB/2007msec); 0 zone resets 00:26:48.457 slat (nsec): min=1641, max=86526, avg=1792.44, stdev=773.59 00:26:48.457 clat (usec): min=319, max=171367, avg=6774.67, stdev=9738.62 00:26:48.457 lat (usec): min=322, max=171454, avg=6776.46, stdev=9738.80 00:26:48.457 clat percentiles (msec): 00:26:48.457 | 1.00th=[ 4], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 6], 00:26:48.457 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 7], 60.00th=[ 7], 00:26:48.457 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:26:48.457 | 99.00th=[ 10], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 171], 00:26:48.457 | 99.99th=[ 171] 00:26:48.457 bw ( KiB/s): min=23144, max=35552, 
per=99.91%, avg=32294.00, stdev=6103.62, samples=4 00:26:48.457 iops : min= 5786, max= 8888, avg=8073.50, stdev=1525.91, samples=4 00:26:48.457 lat (usec) : 500=0.01% 00:26:48.457 lat (msec) : 2=0.03%, 4=0.90%, 10=93.73%, 20=4.93%, 250=0.39% 00:26:48.457 cpu : usr=65.40%, sys=28.66%, ctx=45, majf=0, minf=4 00:26:48.457 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:48.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.457 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.457 issued rwts: total=16225,16218,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.457 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.457 00:26:48.457 Run status group 0 (all jobs): 00:26:48.457 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=63.4MiB (66.5MB), run=2007-2007msec 00:26:48.457 WRITE: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=63.4MiB (66.4MB), run=2007-2007msec 00:26:48.457 14:08:39 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:48.457 14:08:39 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:26:49.395 14:08:40 -- host/fio.sh@64 -- # ls_nested_guid=09afd84c-b4ce-40ab-a55b-00864ba984af 00:26:49.395 14:08:40 -- host/fio.sh@65 -- # get_lvs_free_mb 09afd84c-b4ce-40ab-a55b-00864ba984af 00:26:49.395 14:08:40 -- common/autotest_common.sh@1343 -- # local lvs_uuid=09afd84c-b4ce-40ab-a55b-00864ba984af 00:26:49.395 14:08:40 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:49.395 14:08:40 -- common/autotest_common.sh@1345 -- # local fc 00:26:49.395 14:08:40 -- common/autotest_common.sh@1346 -- # local cs 00:26:49.395 14:08:40 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:49.655 14:08:40 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:49.655 { 00:26:49.655 "uuid": "ec8843d6-a133-4f93-8e0e-5d24f6e712b1", 00:26:49.655 "name": "lvs_0", 00:26:49.655 "base_bdev": "Nvme0n1", 00:26:49.655 "total_data_clusters": 930, 00:26:49.655 "free_clusters": 0, 00:26:49.655 "block_size": 512, 00:26:49.655 "cluster_size": 1073741824 00:26:49.655 }, 00:26:49.655 { 00:26:49.655 "uuid": "09afd84c-b4ce-40ab-a55b-00864ba984af", 00:26:49.655 "name": "lvs_n_0", 00:26:49.655 "base_bdev": "16d24ac6-83ca-428f-9a57-f9dbc1c13f96", 00:26:49.655 "total_data_clusters": 237847, 00:26:49.655 "free_clusters": 237847, 00:26:49.655 "block_size": 512, 00:26:49.655 "cluster_size": 4194304 00:26:49.655 } 00:26:49.655 ]' 00:26:49.655 14:08:40 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="09afd84c-b4ce-40ab-a55b-00864ba984af") .free_clusters' 00:26:49.655 14:08:40 -- common/autotest_common.sh@1348 -- # fc=237847 00:26:49.655 14:08:40 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="09afd84c-b4ce-40ab-a55b-00864ba984af") .cluster_size' 00:26:49.655 14:08:40 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:49.655 14:08:40 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:26:49.655 14:08:40 -- common/autotest_common.sh@1353 -- # echo 951388 00:26:49.655 951388 00:26:49.655 14:08:40 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:26:50.224 
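get_lvs_free_mb is plain cluster arithmetic over bdev_lvol_get_lvstores output: free_mb = free_clusters * cluster_size / 2^20. For lvs_0 above that was 930 clusters of 1 GiB, hence 952320 MB; for the nested lvs_n_0 here it is 237847 clusters of 4 MiB, hence the 951388 passed straight to bdev_lvol_create. The computation, sketched with the same jq selects the trace shows and the rpc.py path shortened:

  fc=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters")
  cs=$(rpc.py bdev_lvol_get_lvstores | jq ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size")
  free_mb=$(( fc * cs / 1024 / 1024 ))   # 237847 * 4194304 / 2^20 = 951388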
0cb008c0-b241-4b21-896f-b07be776cf3a 00:26:50.224 14:08:41 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:26:50.483 14:08:41 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:26:50.483 14:08:41 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:50.743 14:08:41 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:50.743 14:08:41 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:50.743 14:08:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:50.743 14:08:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:50.743 14:08:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:50.743 14:08:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:50.743 14:08:41 -- common/autotest_common.sh@1320 -- # shift 00:26:50.743 14:08:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:50.743 14:08:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:50.743 14:08:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:50.743 14:08:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:50.743 14:08:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:50.743 14:08:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:50.743 14:08:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:50.743 14:08:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:50.743 14:08:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:26:50.743 14:08:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:50.743 14:08:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:50.743 14:08:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:50.743 14:08:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:50.743 14:08:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:26:50.743 14:08:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:51.002 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:51.002 fio-3.35 00:26:51.002 Starting 1 thread 00:26:51.002 EAL: No free 2048 kB hugepages reported on node 1 00:26:53.541 00:26:53.541 test: (groupid=0, jobs=1): err= 0: pid=3394857: Tue Jul 23 14:08:44 2024 00:26:53.541 read: IOPS=7879, BW=30.8MiB/s (32.3MB/s)(61.7MiB/2005msec) 00:26:53.541 slat (nsec): 
min=1581, max=112136, avg=1711.09, stdev=1141.91 00:26:53.541 clat (usec): min=3654, max=21766, avg=9299.56, stdev=1974.39 00:26:53.541 lat (usec): min=3658, max=21768, avg=9301.27, stdev=1974.37 00:26:53.541 clat percentiles (usec): 00:26:53.541 | 1.00th=[ 6063], 5.00th=[ 6915], 10.00th=[ 7439], 20.00th=[ 8029], 00:26:53.541 | 30.00th=[ 8356], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9241], 00:26:53.541 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[11731], 95.00th=[13304], 00:26:53.541 | 99.00th=[16450], 99.50th=[18220], 99.90th=[21627], 99.95th=[21627], 00:26:53.541 | 99.99th=[21627] 00:26:53.541 bw ( KiB/s): min=29544, max=32336, per=99.79%, avg=31450.00, stdev=1283.57, samples=4 00:26:53.541 iops : min= 7386, max= 8084, avg=7862.50, stdev=320.89, samples=4 00:26:53.541 write: IOPS=7853, BW=30.7MiB/s (32.2MB/s)(61.5MiB/2005msec); 0 zone resets 00:26:53.541 slat (nsec): min=1623, max=81746, avg=1789.49, stdev=719.41 00:26:53.541 clat (usec): min=1842, max=14642, avg=6904.50, stdev=1256.64 00:26:53.541 lat (usec): min=1847, max=14643, avg=6906.29, stdev=1256.65 00:26:53.541 clat percentiles (usec): 00:26:53.541 | 1.00th=[ 4047], 5.00th=[ 4883], 10.00th=[ 5407], 20.00th=[ 5997], 00:26:53.541 | 30.00th=[ 6390], 40.00th=[ 6652], 50.00th=[ 6915], 60.00th=[ 7111], 00:26:53.541 | 70.00th=[ 7373], 80.00th=[ 7701], 90.00th=[ 8291], 95.00th=[ 8979], 00:26:53.541 | 99.00th=[10683], 99.50th=[11600], 99.90th=[13173], 99.95th=[13566], 00:26:53.541 | 99.99th=[14484] 00:26:53.541 bw ( KiB/s): min=30784, max=31624, per=99.91%, avg=31386.00, stdev=402.41, samples=4 00:26:53.541 iops : min= 7696, max= 7906, avg=7846.50, stdev=100.60, samples=4 00:26:53.541 lat (msec) : 2=0.01%, 4=0.49%, 10=87.52%, 20=11.87%, 50=0.12% 00:26:53.541 cpu : usr=64.42%, sys=29.69%, ctx=37, majf=0, minf=4 00:26:53.541 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:53.541 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:53.541 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:53.541 issued rwts: total=15798,15747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:53.541 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:53.541 00:26:53.541 Run status group 0 (all jobs): 00:26:53.541 READ: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.7MiB (64.7MB), run=2005-2005msec 00:26:53.541 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.5MiB (64.5MB), run=2005-2005msec 00:26:53.541 14:08:44 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:53.800 14:08:44 -- host/fio.sh@74 -- # sync 00:26:53.800 14:08:44 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:26:57.994 14:08:48 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:26:57.994 14:08:48 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:00.531 14:08:51 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:00.531 14:08:51 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:02.437 14:08:53 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:02.437 14:08:53 -- host/fio.sh@85 
00:26:53.541 14:08:44 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:26:53.800 14:08:44 -- host/fio.sh@74 -- # sync
00:26:53.800 14:08:44 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0
00:26:57.994 14:08:48 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0
00:26:57.994 14:08:48 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0
00:27:00.531 14:08:51 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0
00:27:00.531 14:08:51 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:27:02.437 14:08:53 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:27:02.437 14:08:53 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:27:02.437 14:08:53 -- host/fio.sh@86 -- # nvmftestfini
00:27:02.437 14:08:53 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:02.437 14:08:53 -- nvmf/common.sh@116 -- # sync
00:27:02.437 14:08:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:02.437 14:08:53 -- nvmf/common.sh@119 -- # set +e
00:27:02.437 14:08:53 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:02.437 14:08:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:27:02.437 rmmod nvme_tcp
00:27:02.437 rmmod nvme_fabrics
00:27:02.437 rmmod nvme_keyring
00:27:02.437 14:08:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:27:02.437 14:08:53 -- nvmf/common.sh@123 -- # set -e
00:27:02.437 14:08:53 -- nvmf/common.sh@124 -- # return 0
00:27:02.437 14:08:53 -- nvmf/common.sh@477 -- # '[' -n 3391064 ']'
00:27:02.437 14:08:53 -- nvmf/common.sh@478 -- # killprocess 3391064
00:27:02.437 14:08:53 -- common/autotest_common.sh@926 -- # '[' -z 3391064 ']'
00:27:02.437 14:08:53 -- common/autotest_common.sh@930 -- # kill -0 3391064
00:27:02.437 14:08:53 -- common/autotest_common.sh@931 -- # uname
00:27:02.437 14:08:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:02.437 14:08:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3391064
00:27:02.437 14:08:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:02.437 14:08:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:02.437 14:08:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3391064'
00:27:02.437 killing process with pid 3391064
00:27:02.437 14:08:53 -- common/autotest_common.sh@945 -- # kill 3391064
00:27:02.437 14:08:53 -- common/autotest_common.sh@950 -- # wait 3391064
00:27:02.697 14:08:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:27:02.697 14:08:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:27:02.697 14:08:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:27:02.697 14:08:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:27:02.697 14:08:53 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:27:02.697 14:08:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:27:02.697 14:08:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:27:02.697 14:08:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:27:05.237 14:08:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:27:05.237
00:27:05.237 real 0m38.941s
00:27:05.237 user 2m37.784s
00:27:05.237 sys 0m8.238s
00:27:05.237 14:08:55 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:27:05.237 14:08:55 -- common/autotest_common.sh@10 -- # set +x
00:27:05.237 ************************************
00:27:05.237 END TEST nvmf_fio_host
00:27:05.237 ************************************
00:27:05.237 14:08:55 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:27:05.237 14:08:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:27:05.237 14:08:55 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:27:05.237 14:08:55 -- common/autotest_common.sh@10 -- # set +x
00:27:05.237 ************************************
00:27:05.237 START TEST nvmf_failover
00:27:05.237 ************************************
00:27:05.237 14:08:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:27:05.237 * Looking for test storage...
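Before the failover trace gets going, it helps to know the shape of the test. Condensed from the commands that appear later in this log, a sketch for orientation (not the script verbatim):

    # Target side: one subsystem, one Malloc namespace, three TCP listeners.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done
    # Initiator side: bdevperf runs verify I/O against the subsystem while
    # listeners are removed and re-added one port at a time, forcing the
    # NVMe bdev to fail over between paths.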
00:27:05.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:05.237 14:08:55 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:05.237 14:08:55 -- nvmf/common.sh@7 -- # uname -s 00:27:05.237 14:08:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:05.237 14:08:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:05.237 14:08:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:05.237 14:08:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:05.237 14:08:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:05.237 14:08:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:05.237 14:08:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:05.237 14:08:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:05.237 14:08:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:05.237 14:08:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:05.237 14:08:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:05.237 14:08:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:05.237 14:08:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:05.237 14:08:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:05.237 14:08:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:05.237 14:08:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:05.237 14:08:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:05.237 14:08:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:05.237 14:08:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:05.237 14:08:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.237 14:08:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.237 14:08:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.237 14:08:55 -- paths/export.sh@5 -- # export PATH 00:27:05.237 14:08:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:05.237 14:08:55 -- nvmf/common.sh@46 -- # : 0 00:27:05.237 14:08:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:05.237 14:08:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:05.237 14:08:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:05.237 14:08:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:05.237 14:08:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:05.237 14:08:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:05.237 14:08:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:05.237 14:08:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:05.237 14:08:55 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:05.237 14:08:55 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:05.237 14:08:55 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:05.237 14:08:55 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:05.237 14:08:55 -- host/failover.sh@18 -- # nvmftestinit 00:27:05.237 14:08:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:05.237 14:08:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:05.237 14:08:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:05.237 14:08:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:05.237 14:08:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:05.237 14:08:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:05.237 14:08:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:05.237 14:08:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.237 14:08:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:05.237 14:08:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:05.237 14:08:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:05.237 14:08:55 -- common/autotest_common.sh@10 -- # set +x 00:27:10.541 14:09:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:10.541 14:09:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:10.541 14:09:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:10.541 14:09:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:10.541 14:09:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:10.541 14:09:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:10.541 14:09:00 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:27:10.541 14:09:00 -- nvmf/common.sh@294 -- # net_devs=() 00:27:10.541 14:09:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:10.541 14:09:00 -- nvmf/common.sh@295 -- # e810=() 00:27:10.541 14:09:00 -- nvmf/common.sh@295 -- # local -ga e810 00:27:10.541 14:09:00 -- nvmf/common.sh@296 -- # x722=() 00:27:10.541 14:09:00 -- nvmf/common.sh@296 -- # local -ga x722 00:27:10.542 14:09:00 -- nvmf/common.sh@297 -- # mlx=() 00:27:10.542 14:09:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:10.542 14:09:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.542 14:09:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:10.542 14:09:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:10.542 14:09:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:10.542 14:09:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:10.542 14:09:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:10.542 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:10.542 14:09:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:10.542 14:09:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:10.542 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:10.542 14:09:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:10.542 14:09:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:10.542 14:09:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.542 14:09:00 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:27:10.542 14:09:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.542 14:09:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:10.542 Found net devices under 0000:86:00.0: cvl_0_0 00:27:10.542 14:09:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.542 14:09:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:10.542 14:09:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.542 14:09:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:10.542 14:09:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.542 14:09:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:10.542 Found net devices under 0000:86:00.1: cvl_0_1 00:27:10.542 14:09:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.542 14:09:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:10.542 14:09:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:10.542 14:09:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:10.542 14:09:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:10.542 14:09:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:10.542 14:09:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.542 14:09:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:10.542 14:09:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:10.542 14:09:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:10.542 14:09:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:10.542 14:09:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:10.542 14:09:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:10.542 14:09:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.542 14:09:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:10.542 14:09:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:10.542 14:09:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:10.542 14:09:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:10.542 14:09:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:10.542 14:09:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:10.542 14:09:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:10.542 14:09:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:10.542 14:09:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:10.542 14:09:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:10.542 14:09:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:10.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:10.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:27:10.542 00:27:10.542 --- 10.0.0.2 ping statistics --- 00:27:10.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.542 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:10.542 14:09:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:10.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:10.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.308 ms 00:27:10.542 00:27:10.542 --- 10.0.0.1 ping statistics --- 00:27:10.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.542 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:27:10.542 14:09:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.542 14:09:01 -- nvmf/common.sh@410 -- # return 0 00:27:10.542 14:09:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:10.542 14:09:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.542 14:09:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:10.542 14:09:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:10.542 14:09:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.542 14:09:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:10.542 14:09:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:10.542 14:09:01 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:10.542 14:09:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:10.542 14:09:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:10.542 14:09:01 -- common/autotest_common.sh@10 -- # set +x 00:27:10.542 14:09:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:10.542 14:09:01 -- nvmf/common.sh@469 -- # nvmfpid=3400097 00:27:10.542 14:09:01 -- nvmf/common.sh@470 -- # waitforlisten 3400097 00:27:10.542 14:09:01 -- common/autotest_common.sh@819 -- # '[' -z 3400097 ']' 00:27:10.542 14:09:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.542 14:09:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:10.542 14:09:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.542 14:09:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:10.542 14:09:01 -- common/autotest_common.sh@10 -- # set +x 00:27:10.542 [2024-07-23 14:09:01.148216] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:10.542 [2024-07-23 14:09:01.148260] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:10.542 EAL: No free 2048 kB hugepages reported on node 1 00:27:10.542 [2024-07-23 14:09:01.205983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:10.542 [2024-07-23 14:09:01.281907] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:10.542 [2024-07-23 14:09:01.282011] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:10.542 [2024-07-23 14:09:01.282020] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:10.542 [2024-07-23 14:09:01.282026] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
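For reference, the nvmf_tcp_init plumbing traced above reduces to the following steps (interface names and addresses as they appear in this log; cvl_0_0 and cvl_0_1 are the two E810 ports found earlier, effectively looped to each other under NET_TYPE=phy):

    # Move the target-side port into its own network namespace so both ends
    # of the NVMe/TCP connection can run on one host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator, default ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # nvmf_tgt is then launched under "ip netns exec cvl_0_0_ns_spdk", which
    # is why the pings above already cross namespaces before the target starts.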
00:27:10.542 [2024-07-23 14:09:01.282132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:27:10.542 [2024-07-23 14:09:01.282216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:27:10.542 [2024-07-23 14:09:01.282217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:27:11.109 14:09:01 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:11.109 14:09:01 -- common/autotest_common.sh@852 -- # return 0
00:27:11.109 14:09:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:27:11.109 14:09:01 -- common/autotest_common.sh@718 -- # xtrace_disable
00:27:11.109 14:09:01 -- common/autotest_common.sh@10 -- # set +x
00:27:11.109 14:09:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:11.109 14:09:01 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:27:11.367 [2024-07-23 14:09:02.138863] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:27:11.367 14:09:02 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:27:11.367 Malloc0
00:27:11.367 14:09:02 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:27:11.625 14:09:02 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:27:11.883 14:09:02 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:11.883 [2024-07-23 14:09:02.868005] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:11.883 14:09:02 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:12.142 [2024-07-23 14:09:03.052596] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:27:12.142 14:09:03 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:12.400 [2024-07-23 14:09:03.225174] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:27:12.400 14:09:03 -- host/failover.sh@31 -- # bdevperf_pid=3400635
00:27:12.400 14:09:03 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:27:12.401 14:09:03 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:27:12.401 14:09:03 -- host/failover.sh@34 -- # waitforlisten 3400635 /var/tmp/bdevperf.sock
00:27:12.401 14:09:03 -- common/autotest_common.sh@819 -- # '[' -z 3400635 ']'
00:27:12.401 14:09:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:12.401 14:09:03 -- common/autotest_common.sh@824 -- # local max_retries=100
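The two bdev_nvme_attach_controller calls that follow are the crux of the test: both register under the same bdev name NVMe0 and the same NQN, so the second call adds an alternate path to the existing controller rather than a new device. A sketch, with the socket and values taken from this log:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # primary path
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 \
        -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1    # alternate path
    # With two paths registered, removing one listener on the target drops a
    # single path; the NVMe0n1 bdev fails over instead of erroring out.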
00:27:12.401 14:09:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:27:12.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:27:12.401 14:09:03 -- common/autotest_common.sh@828 -- # xtrace_disable
00:27:12.401 14:09:03 -- common/autotest_common.sh@10 -- # set +x
00:27:13.337 14:09:04 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:27:13.337 14:09:04 -- common/autotest_common.sh@852 -- # return 0
00:27:13.337 14:09:04 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:13.596 NVMe0n1
00:27:13.596 14:09:04 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:13.855
00:27:13.855 14:09:04 -- host/failover.sh@39 -- # run_test_pid=3400879
00:27:13.855 14:09:04 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:27:13.855 14:09:04 -- host/failover.sh@41 -- # sleep 1
00:27:15.235 14:09:05 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:15.235 [2024-07-23 14:09:06.032595] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1310600 is same with the state(5) to be set
00:27:15.235 [... the same tcp.c:1574 message for tqpair=0x1310600 repeats roughly 70 more times (14:09:06.032644 through 14:09:06.033089) while the qpair behind the removed 4420 listener disconnects; duplicates collapsed ...]
00:27:15.236 14:09:06 -- host/failover.sh@45 -- # sleep 3
00:27:18.529 14:09:09 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:18.529
00:27:18.529 14:09:09 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:27:18.529 [2024-07-23 14:09:09.482408] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1311240 is same with the state(5) to be set
00:27:18.529 [... the same tcp.c:1574 message for tqpair=0x1311240 repeats roughly 50 more times (14:09:09.482446 through 14:09:09.482742) while the qpair behind the removed 4421 listener disconnects; duplicates collapsed ...]
00:27:18.530 14:09:09 -- host/failover.sh@50 -- # sleep 3
00:27:21.823 14:09:12 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:27:21.823 [2024-07-23 14:09:12.667965] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:27:21.823 14:09:12 -- host/failover.sh@55 -- # sleep 1
00:27:22.761 14:09:13 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:27:23.021 [2024-07-23 14:09:13.857991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x116bc10 is same with the state(5) to be set
00:27:23.021 [... the same tcp.c:1574 message for tqpair=0x116bc10 repeats roughly 50 more times (14:09:13.858038 through 14:09:13.858319) while the qpair behind the removed 4422 listener disconnects; duplicates collapsed ...]
00:27:23.021 14:09:13 -- host/failover.sh@59 -- # wait 3400879
00:27:29.600 0
00:27:29.600 14:09:20 -- host/failover.sh@61 -- # killprocess 3400635
00:27:29.600 14:09:20 -- common/autotest_common.sh@926 -- # '[' -z 3400635 ']'
00:27:29.600 14:09:20 -- common/autotest_common.sh@930 -- # kill -0 3400635
00:27:29.600 14:09:20 -- common/autotest_common.sh@931 -- # uname
00:27:29.600 14:09:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:29.600 14:09:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3400635
00:27:29.600 14:09:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:29.600 14:09:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:29.600 14:09:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3400635'
00:27:29.600 killing process with pid 3400635
00:27:29.600 14:09:20 -- common/autotest_common.sh@945 -- # kill 3400635
00:27:29.600 14:09:20 -- common/autotest_common.sh@950 -- # wait 3400635
00:27:29.600 14:09:20 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:29.600 [2024-07-23 14:09:03.293350] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:27:29.600 [2024-07-23 14:09:03.293402] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400635 ]
00:27:29.600 EAL: No free 2048 kB hugepages reported on node 1
00:27:29.600 [2024-07-23 14:09:03.347777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:29.600 [2024-07-23 14:09:03.421698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:27:29.600 Running I/O for 15 seconds...
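The nvme_qpair dump that follows comes from bdevperf's log (the try.txt printed above) and lines up with the first listener removal: the target deletes the submission queue behind port 4420, each in-flight read completes with ABORTED - SQ DELETION (00/08), and the verify run nonetheless finishes cleanly on the surviving path (the wait on perform_tests above printed 0). When digesting a burst like this one, a couple of illustrative one-liners (file name as the harness uses it):

    grep -c 'ABORTED - SQ DELETION' try.txt          # how many completions were aborted
    grep -o 'lba:[0-9]*' try.txt | sort -u | wc -l   # distinct LBAs among the aborted reads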
00:27:29.600 [2024-07-23 14:09:06.033389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:29.600 [2024-07-23 14:09:06.033427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: every outstanding READ/WRITE command on sqid:1 (LBAs 12304-13592) completes as ABORTED - SQ DELETION (00/08) while qpair 0x59b010 is torn down ...] 
00:27:29.603 [2024-07-23 14:09:06.035366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x59b010 is same with the state(5) to be set 
00:27:29.603 [2024-07-23 14:09:06.035375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:27:29.603 [2024-07-23 14:09:06.035381] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:27:29.603 [2024-07-23 14:09:06.035387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13240 len:8 PRP1 0x0 PRP2 0x0 
00:27:29.603 [2024-07-23 14:09:06.035394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:29.603 [2024-07-23 14:09:06.035437] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x59b010 was disconnected and freed. reset controller. 
00:27:29.603 [2024-07-23 14:09:06.035451] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:27:29.603 [2024-07-23 14:09:06.035472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:29.603 [2024-07-23 14:09:06.035479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:29.603 [2024-07-23 14:09:06.035487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:29.603 [2024-07-23 14:09:06.035493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:29.603 [2024-07-23 14:09:06.035500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:29.603 [2024-07-23 14:09:06.035506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:29.603 [2024-07-23 14:09:06.035513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:29.603 [2024-07-23 14:09:06.035520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:29.603 [2024-07-23 14:09:06.035526] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:29.603 [2024-07-23 14:09:06.037353] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:27:29.603 [2024-07-23 14:09:06.037379] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a5010 (9): Bad file descriptor 
00:27:29.603 [2024-07-23 14:09:06.070356] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
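The burst above is the interesting part of this failover pass: every command queued on the deleted submission queue is manually completed with ABORTED - SQ DELETION, the admin qpair's pending ASYNC EVENT REQUESTs are aborted, and bdev_nvme fails the controller over from 10.0.0.2:4420 to 10.0.0.2:4421 before the reset completes. A minimal triage sketch over a saved copy of this console output (the path console.log is hypothetical, not something this job produces):

  # Count completions aborted by submission-queue deletion (I/O and admin).
  grep -c 'ABORTED - SQ DELETION' console.log
  # Pull out the controller state transitions around the failover.
  grep -E 'bdev_nvme_failover_trid|nvme_ctrlr_fail|nvme_ctrlr_disconnect|_bdev_nvme_reset_ctrlr_complete' console.log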
00:27:29.603 [2024-07-23 14:09:09.482914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:29.603 [2024-07-23 14:09:09.482948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: outstanding READ/WRITE commands on sqid:1 (LBAs 96304-97184) again complete as ABORTED - SQ DELETION (00/08) during the next qpair teardown ...] 
00:27:29.605 [2024-07-23 14:09:09.483881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:29.605 [2024-07-23 14:09:09.483887] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.483895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.605 [2024-07-23 14:09:09.483901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.483910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.483917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.483925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.605 [2024-07-23 14:09:09.483932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.483943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.483951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.483959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.483966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.483975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.483981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.483989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.605 [2024-07-23 14:09:09.483996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.484011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.484027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.484047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.484061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.484076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.484091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.484105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.605 [2024-07-23 14:09:09.484120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.605 [2024-07-23 14:09:09.484129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:29.606 [2024-07-23 14:09:09.484206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484356] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:97344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484506] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97464 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.606 [2024-07-23 14:09:09.484677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.606 [2024-07-23 14:09:09.484715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.606 [2024-07-23 14:09:09.484722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.607 [2024-07-23 14:09:09.484739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.607 [2024-07-23 14:09:09.484754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.607 [2024-07-23 14:09:09.484768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.607 [2024-07-23 14:09:09.484783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.607 [2024-07-23 14:09:09.484797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:29.607 [2024-07-23 14:09:09.484811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.607 [2024-07-23 14:09:09.484826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.607 [2024-07-23 14:09:09.484841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.607 [2024-07-23 14:09:09.484857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.607 [2024-07-23 14:09:09.484872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.607 [2024-07-23 14:09:09.484886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5b1560 is same with the state(5) to be set 00:27:29.607 [2024-07-23 14:09:09.484904] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:29.607 [2024-07-23 14:09:09.484912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:29.607 [2024-07-23 14:09:09.484918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96888 len:8 PRP1 0x0 PRP2 0x0 00:27:29.607 [2024-07-23 14:09:09.484925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.607 [2024-07-23 14:09:09.484966] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5b1560 was disconnected and freed. reset controller. 
00:27:29.607 [2024-07-23 14:09:09.484976] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:29.607 [2024-07-23 14:09:09.484996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:29.607 [2024-07-23 14:09:09.485004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.607 [2024-07-23 14:09:09.485011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:29.607 [2024-07-23 14:09:09.485019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.607 [2024-07-23 14:09:09.485026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:29.607 [2024-07-23 14:09:09.485033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.607 [2024-07-23 14:09:09.485040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:29.607 [2024-07-23 14:09:09.485052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.607 [2024-07-23 14:09:09.485058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:29.607 [2024-07-23 14:09:09.486858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:29.607 [2024-07-23 14:09:09.486885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a5010 (9): Bad file descriptor
00:27:29.607 [2024-07-23 14:09:09.641863] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:27:29.607 [2024-07-23 14:09:13.858484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.607 [2024-07-23 14:09:13.858520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:29.607-00:27:29.609 [2024-07-23 14:09:13.858536-13.859888] [... repeated nvme_io_qpair_print_command/spdk_nvme_print_completion NOTICE pairs elided: queued READ/WRITE commands (sqid:1 nsid:1, lba 98160-99296, len:8) each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:27:29.609 [2024-07-23 14:09:13.859896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:29.609 [2024-07-23 14:09:13.859903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.609 [2024-07-23 14:09:13.859911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.609 [2024-07-23 14:09:13.859917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.609 [2024-07-23 14:09:13.859925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.609 [2024-07-23 14:09:13.859932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.609 [2024-07-23 14:09:13.859941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.859948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.859957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.859964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.859972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.859978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.859986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.859993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 
[2024-07-23 14:09:13.860067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860222] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:29.610 [2024-07-23 14:09:13.860354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:97 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.610 [2024-07-23 14:09:13.860458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a0de0 is same with the state(5) to be set 00:27:29.610 [2024-07-23 14:09:13.860474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:29.610 [2024-07-23 14:09:13.860480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:29.610 [2024-07-23 14:09:13.860486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98992 len:8 PRP1 0x0 PRP2 0x0 00:27:29.610 [2024-07-23 14:09:13.860492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860535] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5a0de0 was disconnected and freed. reset controller. 
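The wall of ABORTED - SQ DELETION completions above is the bdev_nvme layer draining every command still queued on the old path when its submission queue is torn down mid-failover; the status pair (00/08) decodes as generic command status / Command Aborted due to SQ Deletion. For quick triage of a dump like this once it is captured in the test's try.txt, a rough one-liner (illustrative only, not part of the test scripts) is:

# Count how many of the aborted commands were READs vs WRITEs.
grep -o 'nvme_io_qpair_print_command: \*NOTICE\*: \(READ\|WRITE\)' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c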
00:27:29.610 [2024-07-23 14:09:13.860545] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:29.610 [2024-07-23 14:09:13.860565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.610 [2024-07-23 14:09:13.860574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.610 [2024-07-23 14:09:13.860591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.610 [2024-07-23 14:09:13.860598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.610 [2024-07-23 14:09:13.860604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.611 [2024-07-23 14:09:13.860611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:29.611 [2024-07-23 14:09:13.860617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:29.611 [2024-07-23 14:09:13.860626] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:29.611 [2024-07-23 14:09:13.862413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:29.611 [2024-07-23 14:09:13.862439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5a5010 (9): Bad file descriptor 00:27:29.611 [2024-07-23 14:09:13.980863] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
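That reset is the failover path working as intended: the qpair on 10.0.0.2:4422 was torn down, so bdev_nvme failed over to 10.0.0.2:4420 and reconnected. The alternate paths exist because the test attaches the same controller name to every listener port; a condensed sketch of that setup, built from the rpc.py invocations that appear verbatim later in this log (the loop itself is illustrative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# One bdev (NVMe0) with three TCP paths to the same subsystem; when the
# active path drops, bdev_nvme retries the next one, producing the
# failover/reset sequence logged above.
for port in 4420 4421 4422; do
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s $port -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done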
00:27:29.611
00:27:29.611 Latency(us)
00:27:29.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:29.611 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:29.611 Verification LBA range: start 0x0 length 0x4000
00:27:29.611 NVMe0n1 : 15.01 16411.39 64.11 1461.08 0.00 7148.64 883.31 24504.77
00:27:29.611 ===================================================================================================================
00:27:29.611 Total : 16411.39 64.11 1461.08 0.00 7148.64 883.31 24504.77
00:27:29.611 Received shutdown signal, test time was about 15.000000 seconds
00:27:29.611
00:27:29.611 Latency(us)
00:27:29.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:29.611 ===================================================================================================================
00:27:29.611 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:29.611 14:09:20 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:27:29.611 14:09:20 -- host/failover.sh@65 -- # count=3
00:27:29.611 14:09:20 -- host/failover.sh@67 -- # (( count != 3 ))
00:27:29.611 14:09:20 -- host/failover.sh@73 -- # bdevperf_pid=3403830
00:27:29.611 14:09:20 -- host/failover.sh@75 -- # waitforlisten 3403830 /var/tmp/bdevperf.sock
00:27:29.611 14:09:20 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:27:29.611 14:09:20 -- common/autotest_common.sh@819 -- # '[' -z 3403830 ']'
00:27:29.611 14:09:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:27:29.611 14:09:20 -- common/autotest_common.sh@824 -- # local max_retries=100
00:27:29.611 14:09:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
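waitforlisten above blocks until the freshly forked bdevperf (-z keeps it idle until told to run) answers on its RPC socket. A minimal sketch of the polling idea, assuming this simplified stand-in for the real helper in autotest_common.sh:

wait_for_rpc_sock() {
    local pid=$1 sock=$2 i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
        [ -S "$sock" ] && return 0               # UNIX socket is up
        sleep 0.1
    done
    return 1                                     # timed out
}
wait_for_rpc_sock 3403830 /var/tmp/bdevperf.sock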
00:27:29.611 14:09:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:29.611 14:09:20 -- common/autotest_common.sh@10 -- # set +x 00:27:30.179 14:09:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:30.179 14:09:21 -- common/autotest_common.sh@852 -- # return 0 00:27:30.179 14:09:21 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:30.438 [2024-07-23 14:09:21.267540] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:30.438 14:09:21 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:30.438 [2024-07-23 14:09:21.452109] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:30.697 14:09:21 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:30.697 NVMe0n1 00:27:30.957 14:09:21 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:31.215 00:27:31.215 14:09:22 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:31.783 00:27:31.783 14:09:22 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:31.783 14:09:22 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:31.783 14:09:22 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:32.042 14:09:22 -- host/failover.sh@87 -- # sleep 3 00:27:35.377 14:09:25 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:35.377 14:09:25 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:35.377 14:09:26 -- host/failover.sh@90 -- # run_test_pid=3404780 00:27:35.377 14:09:26 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:35.377 14:09:26 -- host/failover.sh@92 -- # wait 3404780 00:27:36.314 0 00:27:36.314 14:09:27 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:36.314 [2024-07-23 14:09:20.314796] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:36.314 [2024-07-23 14:09:20.314849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403830 ] 00:27:36.315 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.315 [2024-07-23 14:09:20.369822] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.315 [2024-07-23 14:09:20.436461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.315 [2024-07-23 14:09:22.869897] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:36.315 [2024-07-23 14:09:22.869943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.315 [2024-07-23 14:09:22.869954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.315 [2024-07-23 14:09:22.869963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.315 [2024-07-23 14:09:22.869970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.315 [2024-07-23 14:09:22.869977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.315 [2024-07-23 14:09:22.869984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.315 [2024-07-23 14:09:22.869992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.315 [2024-07-23 14:09:22.869998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.315 [2024-07-23 14:09:22.870005] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:36.315 [2024-07-23 14:09:22.870025] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:36.315 [2024-07-23 14:09:22.870038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ba7010 (9): Bad file descriptor 00:27:36.315 [2024-07-23 14:09:22.876680] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:36.315 Running I/O for 1 seconds... 
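What just ran, compressed: with all three paths attached, the script yanks the active 4420 path out from under the live bdev, waits out the reset, then drives a timed verify workload through bdevperf's RPC socket; the table below is that run's summary. Condensed from the @82 to @92 trace above (RPC and script names verbatim, the glue between them illustrative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Confirm the controller is registered, then pull its active path.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3    # let the failover reset settle
# Tell the idle (-z) bdevperf to execute its configured workload.
$spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
run_test_pid=$!
wait $run_test_pid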
00:27:36.315
00:27:36.315 Latency(us)
00:27:36.315 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:36.315 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:36.315 Verification LBA range: start 0x0 length 0x4000
00:27:36.315 NVMe0n1 : 1.01 16529.34 64.57 0.00 0.00 7711.78 1332.09 19375.86
00:27:36.315 ===================================================================================================================
00:27:36.315 Total : 16529.34 64.57 0.00 0.00 7711.78 1332.09 19375.86
00:27:36.315 14:09:27 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 14:09:27 -- host/failover.sh@95 -- # grep -q NVMe0
00:27:36.574 14:09:27 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:36.574 14:09:27 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 14:09:27 -- host/failover.sh@99 -- # grep -q NVMe0
00:27:36.833 14:09:27 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:27:37.092 14:09:27 -- host/failover.sh@101 -- # sleep 3
00:27:40.383 14:09:30 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 14:09:30 -- host/failover.sh@103 -- # grep -q NVMe0
00:27:40.383 14:09:31 -- host/failover.sh@108 -- # killprocess 3403830
00:27:40.383 14:09:31 -- common/autotest_common.sh@926 -- # '[' -z 3403830 ']'
00:27:40.383 14:09:31 -- common/autotest_common.sh@930 -- # kill -0 3403830
00:27:40.383 14:09:31 -- common/autotest_common.sh@931 -- # uname
00:27:40.383 14:09:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:27:40.383 14:09:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3403830
00:27:40.383 14:09:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:27:40.383 14:09:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:27:40.383 14:09:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3403830'
00:27:40.383 killing process with pid 3403830
00:27:40.383 14:09:31 -- common/autotest_common.sh@945 -- # kill 3403830
00:27:40.383 14:09:31 -- common/autotest_common.sh@950 -- # wait 3403830
00:27:40.384 14:09:31 -- host/failover.sh@110 -- # sync
00:27:40.384 14:09:31 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:40.643 14:09:31 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:27:40.643 14:09:31 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:40.643 14:09:31 -- host/failover.sh@116 -- # nvmftestfini
00:27:40.643 14:09:31 -- nvmf/common.sh@476 -- # nvmfcleanup
00:27:40.643 14:09:31 -- nvmf/common.sh@116 -- # sync
00:27:40.643 14:09:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:27:40.643 14:09:31 -- nvmf/common.sh@119 -- # set +e
00:27:40.643 14:09:31 -- nvmf/common.sh@120 -- # for i in {1..20}
00:27:40.643 14:09:31 --
# modprobe -v -r nvme-tcp 00:27:40.643 rmmod nvme_tcp 00:27:40.643 rmmod nvme_fabrics 00:27:40.643 rmmod nvme_keyring 00:27:40.643 14:09:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:40.643 14:09:31 -- nvmf/common.sh@123 -- # set -e 00:27:40.643 14:09:31 -- nvmf/common.sh@124 -- # return 0 00:27:40.643 14:09:31 -- nvmf/common.sh@477 -- # '[' -n 3400097 ']' 00:27:40.643 14:09:31 -- nvmf/common.sh@478 -- # killprocess 3400097 00:27:40.643 14:09:31 -- common/autotest_common.sh@926 -- # '[' -z 3400097 ']' 00:27:40.643 14:09:31 -- common/autotest_common.sh@930 -- # kill -0 3400097 00:27:40.643 14:09:31 -- common/autotest_common.sh@931 -- # uname 00:27:40.643 14:09:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:40.643 14:09:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3400097 00:27:40.643 14:09:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:27:40.643 14:09:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:27:40.643 14:09:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3400097' 00:27:40.643 killing process with pid 3400097 00:27:40.643 14:09:31 -- common/autotest_common.sh@945 -- # kill 3400097 00:27:40.643 14:09:31 -- common/autotest_common.sh@950 -- # wait 3400097 00:27:40.903 14:09:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:40.903 14:09:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:40.903 14:09:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:40.903 14:09:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:40.903 14:09:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:40.903 14:09:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.903 14:09:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.903 14:09:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.440 14:09:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:43.440 00:27:43.440 real 0m38.224s 00:27:43.440 user 2m3.632s 00:27:43.440 sys 0m7.490s 00:27:43.440 14:09:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.440 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:27:43.440 ************************************ 00:27:43.440 END TEST nvmf_failover 00:27:43.440 ************************************ 00:27:43.440 14:09:33 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:43.440 14:09:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:43.440 14:09:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:43.440 14:09:33 -- common/autotest_common.sh@10 -- # set +x 00:27:43.440 ************************************ 00:27:43.440 START TEST nvmf_discovery 00:27:43.440 ************************************ 00:27:43.440 14:09:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:43.440 * Looking for test storage... 
00:27:43.440 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.440 14:09:34 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.440 14:09:34 -- nvmf/common.sh@7 -- # uname -s 00:27:43.440 14:09:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.440 14:09:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.440 14:09:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.440 14:09:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.440 14:09:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.440 14:09:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.440 14:09:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.440 14:09:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.440 14:09:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.440 14:09:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.440 14:09:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:43.440 14:09:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:43.440 14:09:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.440 14:09:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.440 14:09:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.440 14:09:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.440 14:09:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.440 14:09:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.440 14:09:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.441 14:09:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.441 14:09:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.441 14:09:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.441 14:09:34 -- paths/export.sh@5 -- # export PATH 00:27:43.441 14:09:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.441 14:09:34 -- nvmf/common.sh@46 -- # : 0 00:27:43.441 14:09:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:43.441 14:09:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:43.441 14:09:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:43.441 14:09:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.441 14:09:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.441 14:09:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:43.441 14:09:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:43.441 14:09:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:43.441 14:09:34 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:43.441 14:09:34 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:43.441 14:09:34 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:43.441 14:09:34 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:43.441 14:09:34 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:43.441 14:09:34 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:43.441 14:09:34 -- host/discovery.sh@25 -- # nvmftestinit 00:27:43.441 14:09:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:43.441 14:09:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.441 14:09:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:43.441 14:09:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:43.441 14:09:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:43.441 14:09:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.441 14:09:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.441 14:09:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.441 14:09:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:43.441 14:09:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:43.441 14:09:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:43.441 14:09:34 -- common/autotest_common.sh@10 -- # set +x 00:27:48.718 14:09:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:48.718 14:09:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:48.718 14:09:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:48.718 14:09:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:48.718 14:09:38 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:48.718 14:09:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:48.718 14:09:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:48.718 14:09:38 -- nvmf/common.sh@294 -- # net_devs=() 00:27:48.718 14:09:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:48.718 14:09:38 -- nvmf/common.sh@295 -- # e810=() 00:27:48.718 14:09:38 -- nvmf/common.sh@295 -- # local -ga e810 00:27:48.718 14:09:38 -- nvmf/common.sh@296 -- # x722=() 00:27:48.718 14:09:38 -- nvmf/common.sh@296 -- # local -ga x722 00:27:48.718 14:09:38 -- nvmf/common.sh@297 -- # mlx=() 00:27:48.718 14:09:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:48.718 14:09:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:48.718 14:09:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:48.718 14:09:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:48.718 14:09:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:48.718 14:09:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:48.718 14:09:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:48.718 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:48.718 14:09:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:48.718 14:09:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:48.718 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:48.718 14:09:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:48.718 14:09:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:48.718 
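Device discovery in this prologue is pure sysfs: each whitelisted PCI function is mapped to its kernel netdev by globbing its net/ directory, which is exactly what the pci_net_devs expansion in the trace just below does. Stand-alone, the same lookup (PCI addresses taken from this run) looks like:

for pci in 0000:86:00.0 0000:86:00.1; do
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net devices under $pci: ${dev##*/}"
    done
done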
14:09:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.718 14:09:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:48.718 14:09:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.718 14:09:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:48.718 Found net devices under 0000:86:00.0: cvl_0_0 00:27:48.718 14:09:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.718 14:09:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:48.718 14:09:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:48.718 14:09:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:48.718 14:09:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:48.718 14:09:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:48.718 Found net devices under 0000:86:00.1: cvl_0_1 00:27:48.718 14:09:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:48.718 14:09:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:48.718 14:09:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:48.718 14:09:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:48.718 14:09:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.718 14:09:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.718 14:09:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:48.718 14:09:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:48.718 14:09:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:48.718 14:09:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:48.718 14:09:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:48.718 14:09:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:48.718 14:09:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:48.718 14:09:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:48.718 14:09:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:48.718 14:09:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:48.718 14:09:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:48.718 14:09:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:48.718 14:09:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:48.718 14:09:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:48.718 14:09:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:48.718 14:09:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:48.718 14:09:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:48.718 14:09:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:48.718 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.718 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:27:48.718 00:27:48.718 --- 10.0.0.2 ping statistics --- 00:27:48.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.718 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:27:48.718 14:09:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:48.718 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:48.718 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:27:48.718 00:27:48.718 --- 10.0.0.1 ping statistics --- 00:27:48.718 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.718 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:27:48.718 14:09:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.718 14:09:38 -- nvmf/common.sh@410 -- # return 0 00:27:48.718 14:09:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:48.718 14:09:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.718 14:09:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:48.718 14:09:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.718 14:09:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:48.718 14:09:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:48.719 14:09:38 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:48.719 14:09:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:48.719 14:09:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:48.719 14:09:38 -- common/autotest_common.sh@10 -- # set +x 00:27:48.719 14:09:38 -- nvmf/common.sh@469 -- # nvmfpid=3409018 00:27:48.719 14:09:38 -- nvmf/common.sh@470 -- # waitforlisten 3409018 00:27:48.719 14:09:38 -- common/autotest_common.sh@819 -- # '[' -z 3409018 ']' 00:27:48.719 14:09:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:48.719 14:09:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.719 14:09:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:48.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:48.719 14:09:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.719 14:09:38 -- common/autotest_common.sh@10 -- # set +x 00:27:48.719 14:09:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:48.719 [2024-07-23 14:09:38.979839] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:27:48.719 [2024-07-23 14:09:38.979881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:48.719 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.719 [2024-07-23 14:09:39.037667] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.719 [2024-07-23 14:09:39.117912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:48.719 [2024-07-23 14:09:39.118017] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:48.719 [2024-07-23 14:09:39.118024] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:48.719 [2024-07-23 14:09:39.118031] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
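Both pings succeeding confirms the namespace plumbing from the trace above: the target-side netdev cvl_0_0 lives in cvl_0_0_ns_spdk with 10.0.0.2, the initiator side cvl_0_1 stays in the root namespace with 10.0.0.1, and the target daemon is then launched inside the namespace just below. Collapsed into one runnable sequence (commands verbatim from this log; the backgrounding and $! capture are assumed glue):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!    # assumed capture; this run recorded nvmfpid=3409018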
00:27:48.719 [2024-07-23 14:09:39.118052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.978 14:09:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:48.978 14:09:39 -- common/autotest_common.sh@852 -- # return 0 00:27:48.978 14:09:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:48.978 14:09:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:48.978 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:48.978 14:09:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:48.978 14:09:39 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:48.978 14:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.978 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:48.978 [2024-07-23 14:09:39.807292] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:48.978 14:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.978 14:09:39 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:48.978 14:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.978 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:48.978 [2024-07-23 14:09:39.815430] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:48.978 14:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.978 14:09:39 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:48.978 14:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.978 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:48.978 null0 00:27:48.978 14:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.978 14:09:39 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:48.978 14:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.978 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:48.978 null1 00:27:48.978 14:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.978 14:09:39 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:48.978 14:09:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:48.978 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:48.978 14:09:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.978 14:09:39 -- host/discovery.sh@45 -- # hostpid=3409088 00:27:48.978 14:09:39 -- host/discovery.sh@46 -- # waitforlisten 3409088 /tmp/host.sock 00:27:48.978 14:09:39 -- common/autotest_common.sh@819 -- # '[' -z 3409088 ']' 00:27:48.978 14:09:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:27:48.978 14:09:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:48.978 14:09:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:48.978 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:48.978 14:09:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:48.978 14:09:39 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:48.978 14:09:39 -- common/autotest_common.sh@10 -- # set +x 00:27:48.978 [2024-07-23 14:09:39.884138] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:27:48.978 [2024-07-23 14:09:39.884180] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3409088 ] 00:27:48.978 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.978 [2024-07-23 14:09:39.938471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:49.237 [2024-07-23 14:09:40.020134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:49.237 [2024-07-23 14:09:40.020247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:49.806 14:09:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:49.806 14:09:40 -- common/autotest_common.sh@852 -- # return 0 00:27:49.806 14:09:40 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:49.806 14:09:40 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:49.806 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.806 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:49.806 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.806 14:09:40 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:49.806 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.806 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:49.806 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.806 14:09:40 -- host/discovery.sh@72 -- # notify_id=0 00:27:49.806 14:09:40 -- host/discovery.sh@78 -- # get_subsystem_names 00:27:49.806 14:09:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:49.806 14:09:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:49.806 14:09:40 -- host/discovery.sh@59 -- # sort 00:27:49.806 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.806 14:09:40 -- host/discovery.sh@59 -- # xargs 00:27:49.806 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:49.806 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.806 14:09:40 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:27:49.806 14:09:40 -- host/discovery.sh@79 -- # get_bdev_list 00:27:49.806 14:09:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:49.806 14:09:40 -- host/discovery.sh@55 -- # xargs 00:27:49.806 14:09:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:49.806 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.806 14:09:40 -- host/discovery.sh@55 -- # sort 00:27:49.806 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:49.806 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.806 14:09:40 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:27:49.806 14:09:40 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:49.806 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.806 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:49.806 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:49.806 14:09:40 -- host/discovery.sh@82 -- # get_subsystem_names 00:27:49.806 14:09:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:49.806 14:09:40 -- host/discovery.sh@59 -- # xargs 
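The host side is a second nvmf_tgt pinned to core 0 and listening on /tmp/host.sock, so every host-side rpc_cmd carries -s /tmp/host.sock. Discovery is started against the target's 8009 listener, and the repeated assertions below lean on two helpers that flatten RPC JSON into sorted, space-separated name lists. A sketch under those assumptions:

    hostrpc="./spdk/scripts/rpc.py -s /tmp/host.sock"
    $hostrpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    # jq/sort/xargs turn the JSON arrays into plain strings usable in [[ ... == ... ]] checks
    get_subsystem_names() { $hostrpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs; }
    get_bdev_list()       { $hostrpc bdev_get_bdevs            | jq -r '.[].name' | sort | xargs; }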
00:27:49.806 14:09:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:49.806 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:49.806 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:49.806 14:09:40 -- host/discovery.sh@59 -- # sort 00:27:49.806 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.065 14:09:40 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:27:50.065 14:09:40 -- host/discovery.sh@83 -- # get_bdev_list 00:27:50.065 14:09:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.065 14:09:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:50.065 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.065 14:09:40 -- host/discovery.sh@55 -- # sort 00:27:50.065 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:50.065 14:09:40 -- host/discovery.sh@55 -- # xargs 00:27:50.065 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.065 14:09:40 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:50.065 14:09:40 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:50.065 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.065 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:50.065 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.065 14:09:40 -- host/discovery.sh@86 -- # get_subsystem_names 00:27:50.065 14:09:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:50.065 14:09:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:50.065 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.065 14:09:40 -- host/discovery.sh@59 -- # sort 00:27:50.065 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:50.065 14:09:40 -- host/discovery.sh@59 -- # xargs 00:27:50.065 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.065 14:09:40 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:27:50.065 14:09:40 -- host/discovery.sh@87 -- # get_bdev_list 00:27:50.065 14:09:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:50.065 14:09:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.065 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.065 14:09:40 -- host/discovery.sh@55 -- # sort 00:27:50.065 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:50.065 14:09:40 -- host/discovery.sh@55 -- # xargs 00:27:50.065 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.065 14:09:40 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:50.065 14:09:40 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:50.065 14:09:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.065 14:09:40 -- common/autotest_common.sh@10 -- # set +x 00:27:50.065 [2024-07-23 14:09:40.994571] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:50.065 14:09:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.065 14:09:40 -- host/discovery.sh@92 -- # get_subsystem_names 00:27:50.065 14:09:41 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:50.065 14:09:41 -- host/discovery.sh@59 -- # xargs 00:27:50.065 14:09:41 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:50.066 14:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.066 14:09:41 -- common/autotest_common.sh@10 -- # set +x 00:27:50.066 
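Note that both helper lists stay empty ('' == '') through subsystem creation, the null0 namespace attach, and even the 4420 data listener: nothing appears on the host until its NQN is allowed into cnode0, since the subsystem rejects unknown hosts by default. The target-side sequence, roughly:

    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # only this step makes the subsystem visible to the discovering host:
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test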
14:09:41 -- host/discovery.sh@59 -- # sort 00:27:50.066 14:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.066 14:09:41 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:50.066 14:09:41 -- host/discovery.sh@93 -- # get_bdev_list 00:27:50.066 14:09:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.066 14:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.066 14:09:41 -- common/autotest_common.sh@10 -- # set +x 00:27:50.066 14:09:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:50.066 14:09:41 -- host/discovery.sh@55 -- # sort 00:27:50.066 14:09:41 -- host/discovery.sh@55 -- # xargs 00:27:50.066 14:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.325 14:09:41 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:27:50.325 14:09:41 -- host/discovery.sh@94 -- # get_notification_count 00:27:50.325 14:09:41 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:50.325 14:09:41 -- host/discovery.sh@74 -- # jq '. | length' 00:27:50.325 14:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.325 14:09:41 -- common/autotest_common.sh@10 -- # set +x 00:27:50.325 14:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.325 14:09:41 -- host/discovery.sh@74 -- # notification_count=0 00:27:50.325 14:09:41 -- host/discovery.sh@75 -- # notify_id=0 00:27:50.325 14:09:41 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:27:50.325 14:09:41 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:50.325 14:09:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:50.325 14:09:41 -- common/autotest_common.sh@10 -- # set +x 00:27:50.325 14:09:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:50.325 14:09:41 -- host/discovery.sh@100 -- # sleep 1 00:27:50.894 [2024-07-23 14:09:41.747268] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:50.894 [2024-07-23 14:09:41.747292] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:50.894 [2024-07-23 14:09:41.747307] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:50.894 [2024-07-23 14:09:41.836579] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:51.153 [2024-07-23 14:09:41.937996] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:51.153 [2024-07-23 14:09:41.938014] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:51.153 14:09:42 -- host/discovery.sh@101 -- # get_subsystem_names 00:27:51.153 14:09:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:51.153 14:09:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:51.153 14:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.153 14:09:42 -- host/discovery.sh@59 -- # sort 00:27:51.153 14:09:42 -- common/autotest_common.sh@10 -- # set +x 00:27:51.153 14:09:42 -- host/discovery.sh@59 -- # xargs 00:27:51.153 14:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@102 -- # get_bdev_list 00:27:51.412 14:09:42 -- host/discovery.sh@55 
-- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.412 14:09:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:51.412 14:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.412 14:09:42 -- host/discovery.sh@55 -- # sort 00:27:51.412 14:09:42 -- common/autotest_common.sh@10 -- # set +x 00:27:51.412 14:09:42 -- host/discovery.sh@55 -- # xargs 00:27:51.412 14:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:27:51.412 14:09:42 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:51.412 14:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.412 14:09:42 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:51.412 14:09:42 -- common/autotest_common.sh@10 -- # set +x 00:27:51.412 14:09:42 -- host/discovery.sh@63 -- # sort -n 00:27:51.412 14:09:42 -- host/discovery.sh@63 -- # xargs 00:27:51.412 14:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@104 -- # get_notification_count 00:27:51.412 14:09:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:51.412 14:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.412 14:09:42 -- common/autotest_common.sh@10 -- # set +x 00:27:51.412 14:09:42 -- host/discovery.sh@74 -- # jq '. | length' 00:27:51.412 14:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@74 -- # notification_count=1 00:27:51.412 14:09:42 -- host/discovery.sh@75 -- # notify_id=1 00:27:51.412 14:09:42 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:51.412 14:09:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.412 14:09:42 -- common/autotest_common.sh@10 -- # set +x 00:27:51.412 14:09:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.412 14:09:42 -- host/discovery.sh@109 -- # sleep 1 00:27:52.350 14:09:43 -- host/discovery.sh@110 -- # get_bdev_list 00:27:52.350 14:09:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.350 14:09:43 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:52.350 14:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.350 14:09:43 -- host/discovery.sh@55 -- # sort 00:27:52.350 14:09:43 -- common/autotest_common.sh@10 -- # set +x 00:27:52.350 14:09:43 -- host/discovery.sh@55 -- # xargs 00:27:52.350 14:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.608 14:09:43 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:52.608 14:09:43 -- host/discovery.sh@111 -- # get_notification_count 00:27:52.608 14:09:43 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:52.608 14:09:43 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:52.608 14:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.608 14:09:43 -- common/autotest_common.sh@10 -- # set +x 00:27:52.608 14:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.608 14:09:43 -- host/discovery.sh@74 -- # notification_count=1 00:27:52.608 14:09:43 -- host/discovery.sh@75 -- # notify_id=2 00:27:52.608 14:09:43 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:27:52.608 14:09:43 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:52.608 14:09:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:52.608 14:09:43 -- common/autotest_common.sh@10 -- # set +x 00:27:52.608 [2024-07-23 14:09:43.441349] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:52.608 [2024-07-23 14:09:43.442293] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:52.608 [2024-07-23 14:09:43.442316] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:52.608 14:09:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:52.608 14:09:43 -- host/discovery.sh@117 -- # sleep 1 00:27:52.608 [2024-07-23 14:09:43.528563] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:52.872 [2024-07-23 14:09:43.758825] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:52.872 [2024-07-23 14:09:43.758842] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:52.872 [2024-07-23 14:09:43.758847] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:53.440 14:09:44 -- host/discovery.sh@118 -- # get_subsystem_names 00:27:53.440 14:09:44 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:53.440 14:09:44 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:53.440 14:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.440 14:09:44 -- host/discovery.sh@59 -- # sort 00:27:53.440 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.440 14:09:44 -- host/discovery.sh@59 -- # xargs 00:27:53.700 14:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.700 14:09:44 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.700 14:09:44 -- host/discovery.sh@119 -- # get_bdev_list 00:27:53.700 14:09:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.700 14:09:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:53.700 14:09:44 -- host/discovery.sh@55 -- # sort 00:27:53.700 14:09:44 -- host/discovery.sh@55 -- # xargs 00:27:53.700 14:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.700 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.700 14:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.700 14:09:44 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:53.700 14:09:44 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:27:53.700 14:09:44 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:53.700 14:09:44 -- host/discovery.sh@63 -- # xargs 00:27:53.700 14:09:44 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:27:53.700 14:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.700 14:09:44 -- host/discovery.sh@63 -- # sort -n 00:27:53.700 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.700 14:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.700 14:09:44 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:53.700 14:09:44 -- host/discovery.sh@121 -- # get_notification_count 00:27:53.700 14:09:44 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:53.700 14:09:44 -- host/discovery.sh@74 -- # jq '. | length' 00:27:53.700 14:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.700 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.700 14:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.700 14:09:44 -- host/discovery.sh@74 -- # notification_count=0 00:27:53.700 14:09:44 -- host/discovery.sh@75 -- # notify_id=2 00:27:53.700 14:09:44 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:27:53.700 14:09:44 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:53.700 14:09:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:53.701 14:09:44 -- common/autotest_common.sh@10 -- # set +x 00:27:53.701 [2024-07-23 14:09:44.645426] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:53.701 [2024-07-23 14:09:44.645448] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:53.701 14:09:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:53.701 14:09:44 -- host/discovery.sh@127 -- # sleep 1 00:27:53.701 [2024-07-23 14:09:44.652304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.701 [2024-07-23 14:09:44.652325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.701 [2024-07-23 14:09:44.652333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.701 [2024-07-23 14:09:44.652340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.701 [2024-07-23 14:09:44.652351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.701 [2024-07-23 14:09:44.652357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.701 [2024-07-23 14:09:44.652365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:53.701 [2024-07-23 14:09:44.652372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:53.701 [2024-07-23 14:09:44.652378] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc419f0 is same with the state(5) to be set 00:27:53.701 [2024-07-23 14:09:44.662318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc419f0 (9): Bad file descriptor 00:27:53.701 [2024-07-23 14:09:44.672357] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:53.701 [2024-07-23 14:09:44.672821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.673188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.673200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc419f0 with addr=10.0.0.2, port=4420 00:27:53.701 [2024-07-23 14:09:44.673208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc419f0 is same with the state(5) to be set 00:27:53.701 [2024-07-23 14:09:44.673220] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc419f0 (9): Bad file descriptor 00:27:53.701 [2024-07-23 14:09:44.673244] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:53.701 [2024-07-23 14:09:44.673252] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:53.701 [2024-07-23 14:09:44.673259] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:53.701 [2024-07-23 14:09:44.673270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.701 [2024-07-23 14:09:44.682410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:53.701 [2024-07-23 14:09:44.682791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.683193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.683205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc419f0 with addr=10.0.0.2, port=4420 00:27:53.701 [2024-07-23 14:09:44.683212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc419f0 is same with the state(5) to be set 00:27:53.701 [2024-07-23 14:09:44.683223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc419f0 (9): Bad file descriptor 00:27:53.701 [2024-07-23 14:09:44.683232] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:53.701 [2024-07-23 14:09:44.683238] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:53.701 [2024-07-23 14:09:44.683245] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:53.701 [2024-07-23 14:09:44.683254] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
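The error churn here is expected: nvmf_subsystem_remove_listener just tore down the 4420 listener that the nvme0 controller was attached through, so each reconnect attempt to that port dies with connect() errno 111 (ECONNREFUSED) and the reset is abandoned. The trigger, on the target side, was:

    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # the host keeps retrying 10.0.0.2:4420 until the discovery poller prunes that path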
00:27:53.701 [2024-07-23 14:09:44.692460] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:53.701 [2024-07-23 14:09:44.692868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.693221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.693234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc419f0 with addr=10.0.0.2, port=4420 00:27:53.701 [2024-07-23 14:09:44.693244] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc419f0 is same with the state(5) to be set 00:27:53.701 [2024-07-23 14:09:44.693255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc419f0 (9): Bad file descriptor 00:27:53.701 [2024-07-23 14:09:44.693264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:53.701 [2024-07-23 14:09:44.693270] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:53.701 [2024-07-23 14:09:44.693277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:53.701 [2024-07-23 14:09:44.693286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.701 [2024-07-23 14:09:44.702514] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:53.701 [2024-07-23 14:09:44.702982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.703385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.703397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc419f0 with addr=10.0.0.2, port=4420 00:27:53.701 [2024-07-23 14:09:44.703404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc419f0 is same with the state(5) to be set 00:27:53.701 [2024-07-23 14:09:44.703414] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc419f0 (9): Bad file descriptor 00:27:53.701 [2024-07-23 14:09:44.703430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:53.701 [2024-07-23 14:09:44.703437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:53.701 [2024-07-23 14:09:44.703443] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:53.701 [2024-07-23 14:09:44.703452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:53.701 [2024-07-23 14:09:44.712563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:53.701 [2024-07-23 14:09:44.713005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.713410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.701 [2024-07-23 14:09:44.713422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc419f0 with addr=10.0.0.2, port=4420 00:27:53.701 [2024-07-23 14:09:44.713439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc419f0 is same with the state(5) to be set 00:27:53.701 [2024-07-23 14:09:44.713450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc419f0 (9): Bad file descriptor 00:27:53.701 [2024-07-23 14:09:44.713470] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:53.701 [2024-07-23 14:09:44.713478] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:53.701 [2024-07-23 14:09:44.713484] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:53.701 [2024-07-23 14:09:44.713493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:53.961 [2024-07-23 14:09:44.722610] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:53.961 [2024-07-23 14:09:44.723068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.961 [2024-07-23 14:09:44.723496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.961 [2024-07-23 14:09:44.723507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc419f0 with addr=10.0.0.2, port=4420 00:27:53.961 [2024-07-23 14:09:44.723514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc419f0 is same with the state(5) to be set 00:27:53.961 [2024-07-23 14:09:44.723527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc419f0 (9): Bad file descriptor 00:27:53.961 [2024-07-23 14:09:44.723543] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:53.961 [2024-07-23 14:09:44.723550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:53.961 [2024-07-23 14:09:44.723556] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:53.961 [2024-07-23 14:09:44.723565] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
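Once the discovery poller re-reads the log page it drops the dead 4420 path ("not found" below) and keeps 4421, so a path query on the host should then report only the surviving port. Using the hostrpc helper sketched earlier:

    $hostrpc bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # expected output after the prune: 4421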
00:27:53.961 [2024-07-23 14:09:44.732573] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:53.961 [2024-07-23 14:09:44.732589] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:54.899 14:09:45 -- host/discovery.sh@128 -- # get_subsystem_names 00:27:54.899 14:09:45 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:54.899 14:09:45 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:54.899 14:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.899 14:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.899 14:09:45 -- host/discovery.sh@59 -- # sort 00:27:54.899 14:09:45 -- host/discovery.sh@59 -- # xargs 00:27:54.899 14:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@129 -- # get_bdev_list 00:27:54.899 14:09:45 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.899 14:09:45 -- host/discovery.sh@55 -- # xargs 00:27:54.899 14:09:45 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:54.899 14:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.899 14:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.899 14:09:45 -- host/discovery.sh@55 -- # sort 00:27:54.899 14:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:27:54.899 14:09:45 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:54.899 14:09:45 -- host/discovery.sh@63 -- # xargs 00:27:54.899 14:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.899 14:09:45 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:54.899 14:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.899 14:09:45 -- host/discovery.sh@63 -- # sort -n 00:27:54.899 14:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@131 -- # get_notification_count 00:27:54.899 14:09:45 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:54.899 14:09:45 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:54.899 14:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.899 14:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.899 14:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@74 -- # notification_count=0 00:27:54.899 14:09:45 -- host/discovery.sh@75 -- # notify_id=2 00:27:54.899 14:09:45 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:54.899 14:09:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:54.899 14:09:45 -- common/autotest_common.sh@10 -- # set +x 00:27:54.899 14:09:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:54.899 14:09:45 -- host/discovery.sh@135 -- # sleep 1 00:27:56.279 14:09:46 -- host/discovery.sh@136 -- # get_subsystem_names 00:27:56.279 14:09:46 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:56.279 14:09:46 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:56.279 14:09:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.279 14:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.279 14:09:46 -- host/discovery.sh@59 -- # sort 00:27:56.279 14:09:46 -- host/discovery.sh@59 -- # xargs 00:27:56.279 14:09:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.279 14:09:46 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:27:56.279 14:09:46 -- host/discovery.sh@137 -- # get_bdev_list 00:27:56.279 14:09:46 -- host/discovery.sh@55 -- # sort 00:27:56.279 14:09:46 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:56.279 14:09:46 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:56.279 14:09:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.279 14:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.279 14:09:46 -- host/discovery.sh@55 -- # xargs 00:27:56.279 14:09:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.279 14:09:46 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:27:56.279 14:09:46 -- host/discovery.sh@138 -- # get_notification_count 00:27:56.279 14:09:46 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:56.279 14:09:46 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:56.279 14:09:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.279 14:09:46 -- common/autotest_common.sh@10 -- # set +x 00:27:56.279 14:09:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:56.279 14:09:47 -- host/discovery.sh@74 -- # notification_count=2 00:27:56.279 14:09:47 -- host/discovery.sh@75 -- # notify_id=4 00:27:56.279 14:09:47 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:27:56.279 14:09:47 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:56.279 14:09:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:56.279 14:09:47 -- common/autotest_common.sh@10 -- # set +x 00:27:57.216 [2024-07-23 14:09:48.021937] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:57.216 [2024-07-23 14:09:48.021955] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:57.216 [2024-07-23 14:09:48.021968] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:57.216 [2024-07-23 14:09:48.110305] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:57.216 [2024-07-23 14:09:48.217163] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:57.216 [2024-07-23 14:09:48.217189] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:57.216 14:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.216 14:09:48 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:57.216 14:09:48 -- common/autotest_common.sh@640 -- # local es=0 00:27:57.216 14:09:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:57.216 14:09:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:57.216 14:09:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.216 14:09:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:57.216 14:09:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.216 14:09:48 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:57.216 14:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.216 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.475 request: 00:27:57.475 { 00:27:57.475 "name": "nvme", 00:27:57.475 "trtype": "tcp", 00:27:57.475 "traddr": "10.0.0.2", 00:27:57.475 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:57.475 "adrfam": "ipv4", 00:27:57.475 "trsvcid": "8009", 00:27:57.475 "wait_for_attach": true, 00:27:57.475 "method": "bdev_nvme_start_discovery", 00:27:57.475 "req_id": 1 00:27:57.475 } 00:27:57.475 Got JSON-RPC error response 00:27:57.475 response: 00:27:57.475 { 00:27:57.475 "code": -17, 00:27:57.475 "message": "File exists" 00:27:57.475 } 00:27:57.475 14:09:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:57.475 14:09:48 -- common/autotest_common.sh@643 -- # es=1 00:27:57.475 14:09:48 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:57.475 14:09:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:57.475 14:09:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:57.475 14:09:48 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:27:57.475 14:09:48 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:57.475 14:09:48 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:57.475 14:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.475 14:09:48 -- host/discovery.sh@67 -- # sort 00:27:57.475 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.475 14:09:48 -- host/discovery.sh@67 -- # xargs 00:27:57.475 14:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.475 14:09:48 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:27:57.475 14:09:48 -- host/discovery.sh@147 -- # get_bdev_list 00:27:57.475 14:09:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.475 14:09:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:57.475 14:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.475 14:09:48 -- host/discovery.sh@55 -- # sort 00:27:57.475 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.475 14:09:48 -- host/discovery.sh@55 -- # xargs 00:27:57.475 14:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.475 14:09:48 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:57.475 14:09:48 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:57.475 14:09:48 -- common/autotest_common.sh@640 -- # local es=0 00:27:57.475 14:09:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:57.475 14:09:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:57.475 14:09:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.475 14:09:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:57.475 14:09:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.475 14:09:48 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:57.475 14:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.475 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.475 request: 00:27:57.475 { 00:27:57.475 "name": "nvme_second", 00:27:57.475 "trtype": "tcp", 00:27:57.475 "traddr": "10.0.0.2", 00:27:57.475 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:57.475 "adrfam": "ipv4", 00:27:57.475 "trsvcid": "8009", 00:27:57.475 "wait_for_attach": true, 00:27:57.475 "method": "bdev_nvme_start_discovery", 00:27:57.475 "req_id": 1 00:27:57.475 } 00:27:57.475 Got JSON-RPC error response 00:27:57.475 response: 00:27:57.475 { 00:27:57.475 "code": -17, 00:27:57.475 "message": "File exists" 00:27:57.475 } 00:27:57.475 14:09:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:27:57.475 14:09:48 -- common/autotest_common.sh@643 -- # es=1 00:27:57.475 14:09:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:27:57.475 14:09:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:27:57.475 14:09:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:27:57.475 
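Two negative cases are exercised here. Starting a second discovery service against the same 10.0.0.2:8009, whether reusing -b nvme or naming it -b nvme_second, is rejected with JSON-RPC -17 ("File exists"); and the final case below points nvme_second at port 8010, where nothing listens, so with -T 3000 the attach gives up after three seconds with -110 ("Connection timed out"). Roughly:

    $hostrpc bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w          # -> error -17, File exists
    $hostrpc bdev_nvme_get_discovery_info | jq -r '.[].name'    # -> nvme (still just one service)
    $hostrpc bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000     # -> error -110, Connection timed out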
14:09:48 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:27:57.475 14:09:48 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:57.475 14:09:48 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:57.475 14:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.475 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.475 14:09:48 -- host/discovery.sh@67 -- # sort 00:27:57.475 14:09:48 -- host/discovery.sh@67 -- # xargs 00:27:57.475 14:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.475 14:09:48 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:27:57.475 14:09:48 -- host/discovery.sh@153 -- # get_bdev_list 00:27:57.476 14:09:48 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:57.476 14:09:48 -- host/discovery.sh@55 -- # xargs 00:27:57.476 14:09:48 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:57.476 14:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.476 14:09:48 -- host/discovery.sh@55 -- # sort 00:27:57.476 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:27:57.476 14:09:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:57.476 14:09:48 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:57.476 14:09:48 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:57.476 14:09:48 -- common/autotest_common.sh@640 -- # local es=0 00:27:57.476 14:09:48 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:57.476 14:09:48 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:27:57.476 14:09:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.476 14:09:48 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:27:57.476 14:09:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:27:57.476 14:09:48 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:57.476 14:09:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:57.476 14:09:48 -- common/autotest_common.sh@10 -- # set +x 00:27:58.444 [2024-07-23 14:09:49.457213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.444 [2024-07-23 14:09:49.457615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:58.444 [2024-07-23 14:09:49.457628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc4a960 with addr=10.0.0.2, port=8010 00:27:58.444 [2024-07-23 14:09:49.457643] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:58.444 [2024-07-23 14:09:49.457650] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:58.444 [2024-07-23 14:09:49.457656] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:59.824 [2024-07-23 14:09:50.459610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.824 [2024-07-23 14:09:50.460016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:59.824 [2024-07-23 14:09:50.460028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error 
of tqpair=0xc4a960 with addr=10.0.0.2, port=8010 00:27:59.824 [2024-07-23 14:09:50.460050] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:59.824 [2024-07-23 14:09:50.460057] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:59.824 [2024-07-23 14:09:50.460064] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:00.762 [2024-07-23 14:09:51.461616] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:00.762 request: 00:28:00.762 { 00:28:00.762 "name": "nvme_second", 00:28:00.762 "trtype": "tcp", 00:28:00.762 "traddr": "10.0.0.2", 00:28:00.762 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:00.762 "adrfam": "ipv4", 00:28:00.762 "trsvcid": "8010", 00:28:00.762 "attach_timeout_ms": 3000, 00:28:00.762 "method": "bdev_nvme_start_discovery", 00:28:00.762 "req_id": 1 00:28:00.762 } 00:28:00.762 Got JSON-RPC error response 00:28:00.762 response: 00:28:00.762 { 00:28:00.762 "code": -110, 00:28:00.762 "message": "Connection timed out" 00:28:00.762 } 00:28:00.762 14:09:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:00.762 14:09:51 -- common/autotest_common.sh@643 -- # es=1 00:28:00.762 14:09:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:00.762 14:09:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:00.762 14:09:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:00.762 14:09:51 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:00.762 14:09:51 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:00.762 14:09:51 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:00.762 14:09:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:00.762 14:09:51 -- host/discovery.sh@67 -- # sort 00:28:00.762 14:09:51 -- common/autotest_common.sh@10 -- # set +x 00:28:00.762 14:09:51 -- host/discovery.sh@67 -- # xargs 00:28:00.762 14:09:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:00.762 14:09:51 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:00.762 14:09:51 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:00.762 14:09:51 -- host/discovery.sh@162 -- # kill 3409088 00:28:00.762 14:09:51 -- host/discovery.sh@163 -- # nvmftestfini 00:28:00.762 14:09:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:00.762 14:09:51 -- nvmf/common.sh@116 -- # sync 00:28:00.762 14:09:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:00.762 14:09:51 -- nvmf/common.sh@119 -- # set +e 00:28:00.762 14:09:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:00.762 14:09:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:00.762 rmmod nvme_tcp 00:28:00.762 rmmod nvme_fabrics 00:28:00.762 rmmod nvme_keyring 00:28:00.762 14:09:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:00.762 14:09:51 -- nvmf/common.sh@123 -- # set -e 00:28:00.762 14:09:51 -- nvmf/common.sh@124 -- # return 0 00:28:00.762 14:09:51 -- nvmf/common.sh@477 -- # '[' -n 3409018 ']' 00:28:00.762 14:09:51 -- nvmf/common.sh@478 -- # killprocess 3409018 00:28:00.762 14:09:51 -- common/autotest_common.sh@926 -- # '[' -z 3409018 ']' 00:28:00.762 14:09:51 -- common/autotest_common.sh@930 -- # kill -0 3409018 00:28:00.762 14:09:51 -- common/autotest_common.sh@931 -- # uname 00:28:00.762 14:09:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:00.762 14:09:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3409018 00:28:00.762 
14:09:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:00.762 14:09:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:00.762 14:09:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3409018' 00:28:00.762 killing process with pid 3409018 00:28:00.762 14:09:51 -- common/autotest_common.sh@945 -- # kill 3409018 00:28:00.762 14:09:51 -- common/autotest_common.sh@950 -- # wait 3409018 00:28:01.021 14:09:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:01.021 14:09:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:01.021 14:09:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:01.021 14:09:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:01.021 14:09:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:01.021 14:09:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:01.021 14:09:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:01.021 14:09:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.927 14:09:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:02.927 00:28:02.927 real 0m19.919s 00:28:02.927 user 0m27.283s 00:28:02.927 sys 0m5.036s 00:28:02.927 14:09:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.927 14:09:53 -- common/autotest_common.sh@10 -- # set +x 00:28:02.927 ************************************ 00:28:02.927 END TEST nvmf_discovery 00:28:02.927 ************************************ 00:28:02.927 14:09:53 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:02.927 14:09:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:02.927 14:09:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:02.928 14:09:53 -- common/autotest_common.sh@10 -- # set +x 00:28:02.928 ************************************ 00:28:02.928 START TEST nvmf_discovery_remove_ifc 00:28:02.928 ************************************ 00:28:02.928 14:09:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:03.187 * Looking for test storage... 
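With nvmf_discovery finished (host target killed, nvme modules unloaded, roughly 20 s of wall time), autotest moves straight into the next suite; run_test from autotest_common.sh is what emits the START/END banners and the timing block seen here, invoked roughly as:

    run_test nvmf_discovery_remove_ifc \
        ./spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp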
00:28:03.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.187 14:09:54 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.187 14:09:54 -- nvmf/common.sh@7 -- # uname -s 00:28:03.187 14:09:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.187 14:09:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.187 14:09:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.187 14:09:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.187 14:09:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.187 14:09:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.187 14:09:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.187 14:09:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.187 14:09:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.187 14:09:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.187 14:09:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:03.187 14:09:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:03.187 14:09:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.187 14:09:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.187 14:09:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.187 14:09:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.187 14:09:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.187 14:09:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.187 14:09:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.187 14:09:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.187 14:09:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.187 14:09:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.187 14:09:54 -- paths/export.sh@5 -- # export PATH 00:28:03.187 14:09:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.187 14:09:54 -- nvmf/common.sh@46 -- # : 0 00:28:03.187 14:09:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:03.187 14:09:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:03.187 14:09:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:03.187 14:09:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.187 14:09:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.187 14:09:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:03.187 14:09:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:03.187 14:09:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:03.187 14:09:54 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:03.187 14:09:54 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:03.187 14:09:54 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:03.187 14:09:54 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:03.187 14:09:54 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:03.187 14:09:54 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:03.187 14:09:54 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:03.187 14:09:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:03.187 14:09:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.187 14:09:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:03.187 14:09:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:03.187 14:09:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:03.187 14:09:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.187 14:09:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.187 14:09:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.187 14:09:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:03.187 14:09:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:03.187 14:09:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:03.187 14:09:54 -- common/autotest_common.sh@10 -- # set +x 00:28:08.463 14:09:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:08.463 14:09:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:08.463 14:09:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:08.463 14:09:58 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:08.463 14:09:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:08.463 14:09:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:08.463 14:09:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:08.463 14:09:58 -- nvmf/common.sh@294 -- # net_devs=() 00:28:08.463 14:09:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:08.463 14:09:58 -- nvmf/common.sh@295 -- # e810=() 00:28:08.463 14:09:58 -- nvmf/common.sh@295 -- # local -ga e810 00:28:08.463 14:09:58 -- nvmf/common.sh@296 -- # x722=() 00:28:08.463 14:09:58 -- nvmf/common.sh@296 -- # local -ga x722 00:28:08.463 14:09:58 -- nvmf/common.sh@297 -- # mlx=() 00:28:08.463 14:09:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:08.463 14:09:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.463 14:09:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.464 14:09:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:08.464 14:09:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:08.464 14:09:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:08.464 14:09:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:08.464 14:09:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:08.464 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:08.464 14:09:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:08.464 14:09:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:08.464 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:08.464 14:09:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:08.464 14:09:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:08.464 14:09:58 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:08.464 14:09:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.464 14:09:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:08.464 14:09:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.464 14:09:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:08.464 Found net devices under 0000:86:00.0: cvl_0_0 00:28:08.464 14:09:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.464 14:09:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:08.464 14:09:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.464 14:09:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:08.464 14:09:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.464 14:09:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:08.464 Found net devices under 0000:86:00.1: cvl_0_1 00:28:08.464 14:09:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.464 14:09:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:08.464 14:09:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:08.464 14:09:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:08.464 14:09:58 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:08.464 14:09:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:08.464 14:09:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.464 14:09:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.464 14:09:58 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:08.464 14:09:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.464 14:09:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.464 14:09:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:08.464 14:09:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.464 14:09:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.464 14:09:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:08.464 14:09:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:08.464 14:09:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.464 14:09:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.464 14:09:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.464 14:09:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:08.464 14:09:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:08.464 14:09:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:08.464 14:09:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:08.464 14:09:59 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:08.464 14:09:59 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:08.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:08.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:28:08.464 00:28:08.464 --- 10.0.0.2 ping statistics --- 00:28:08.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.464 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:28:08.464 14:09:59 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:08.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:08.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:28:08.464 00:28:08.464 --- 10.0.0.1 ping statistics --- 00:28:08.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:08.464 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:28:08.464 14:09:59 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:08.464 14:09:59 -- nvmf/common.sh@410 -- # return 0 00:28:08.464 14:09:59 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:08.464 14:09:59 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:08.464 14:09:59 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:08.464 14:09:59 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:08.464 14:09:59 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:08.464 14:09:59 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:08.464 14:09:59 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:08.464 14:09:59 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:08.464 14:09:59 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:08.464 14:09:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:08.464 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:28:08.464 14:09:59 -- nvmf/common.sh@469 -- # nvmfpid=3414614 00:28:08.464 14:09:59 -- nvmf/common.sh@470 -- # waitforlisten 3414614 00:28:08.464 14:09:59 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:08.464 14:09:59 -- common/autotest_common.sh@819 -- # '[' -z 3414614 ']' 00:28:08.464 14:09:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:08.464 14:09:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:08.464 14:09:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:08.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:08.464 14:09:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:08.464 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:28:08.464 [2024-07-23 14:09:59.172079] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:08.464 [2024-07-23 14:09:59.172126] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:08.464 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.464 [2024-07-23 14:09:59.231053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:08.464 [2024-07-23 14:09:59.302586] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:08.464 [2024-07-23 14:09:59.302696] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:08.464 [2024-07-23 14:09:59.302704] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:08.464 [2024-07-23 14:09:59.302709] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:08.464 [2024-07-23 14:09:59.302729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.033 14:09:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:09.033 14:09:59 -- common/autotest_common.sh@852 -- # return 0 00:28:09.033 14:09:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:09.033 14:09:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:09.033 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.033 14:09:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.033 14:09:59 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:09.033 14:09:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:09.033 14:09:59 -- common/autotest_common.sh@10 -- # set +x 00:28:09.033 [2024-07-23 14:10:00.000412] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.033 [2024-07-23 14:10:00.008576] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:09.033 null0 00:28:09.033 [2024-07-23 14:10:00.040611] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.294 14:10:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:09.294 14:10:00 -- host/discovery_remove_ifc.sh@59 -- # hostpid=3414819 00:28:09.294 14:10:00 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:09.294 14:10:00 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3414819 /tmp/host.sock 00:28:09.294 14:10:00 -- common/autotest_common.sh@819 -- # '[' -z 3414819 ']' 00:28:09.294 14:10:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:09.294 14:10:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:09.294 14:10:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:09.294 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:09.294 14:10:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:09.294 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:28:09.294 [2024-07-23 14:10:00.105559] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:09.294 [2024-07-23 14:10:00.105604] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3414819 ] 00:28:09.294 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.294 [2024-07-23 14:10:00.157475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.294 [2024-07-23 14:10:00.234774] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:09.294 [2024-07-23 14:10:00.234891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:10.233 14:10:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:10.233 14:10:00 -- common/autotest_common.sh@852 -- # return 0 00:28:10.233 14:10:00 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:10.233 14:10:00 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:10.233 14:10:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.233 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:28:10.233 14:10:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.233 14:10:00 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:10.233 14:10:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.233 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:28:10.233 14:10:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:10.233 14:10:00 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:10.233 14:10:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:10.233 14:10:00 -- common/autotest_common.sh@10 -- # set +x 00:28:11.171 [2024-07-23 14:10:02.050280] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:11.171 [2024-07-23 14:10:02.050303] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:11.171 [2024-07-23 14:10:02.050318] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:11.171 [2024-07-23 14:10:02.136571] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:11.431 [2024-07-23 14:10:02.240809] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:11.431 [2024-07-23 14:10:02.240845] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:11.431 [2024-07-23 14:10:02.240864] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:11.431 [2024-07-23 14:10:02.240876] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:11.431 [2024-07-23 14:10:02.240895] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:11.431 14:10:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.431 14:10:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.431 14:10:02 -- common/autotest_common.sh@10 -- # set +x 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.431 14:10:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:11.431 [2024-07-23 14:10:02.290807] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x2407980 was disconnected and freed. delete nvme_qpair. 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:11.431 14:10:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:11.431 14:10:02 -- common/autotest_common.sh@10 -- # set +x 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:11.431 14:10:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:11.431 14:10:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:12.812 14:10:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:12.812 14:10:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:12.812 14:10:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:12.812 14:10:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:12.812 14:10:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:12.812 14:10:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:12.812 14:10:03 -- common/autotest_common.sh@10 -- # set +x 00:28:12.812 14:10:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:12.812 14:10:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:12.812 14:10:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:13.751 14:10:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:13.751 14:10:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:13.751 14:10:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:13.751 14:10:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:13.751 14:10:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:13.751 14:10:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:13.751 14:10:04 -- common/autotest_common.sh@10 -- # set +x 00:28:13.751 14:10:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:13.751 14:10:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:13.751 14:10:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:14.692 14:10:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:14.692 14:10:05 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:28:14.692 14:10:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:14.692 14:10:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:14.692 14:10:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.692 14:10:05 -- common/autotest_common.sh@10 -- # set +x 00:28:14.692 14:10:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:14.692 14:10:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.692 14:10:05 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:14.692 14:10:05 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:15.632 14:10:06 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:15.632 14:10:06 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:15.632 14:10:06 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:15.632 14:10:06 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:15.632 14:10:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:15.632 14:10:06 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:15.632 14:10:06 -- common/autotest_common.sh@10 -- # set +x 00:28:15.632 14:10:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:15.632 14:10:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:15.632 14:10:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:17.014 14:10:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.014 14:10:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.014 14:10:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:17.014 14:10:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.014 14:10:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.014 14:10:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.014 14:10:07 -- common/autotest_common.sh@10 -- # set +x 00:28:17.014 14:10:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:17.014 [2024-07-23 14:10:07.682157] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:17.014 [2024-07-23 14:10:07.682193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.014 [2024-07-23 14:10:07.682205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.014 [2024-07-23 14:10:07.682214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.014 [2024-07-23 14:10:07.682221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.014 [2024-07-23 14:10:07.682228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.014 [2024-07-23 14:10:07.682235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.014 [2024-07-23 14:10:07.682243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.014 [2024-07-23 14:10:07.682249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.014 [2024-07-23 14:10:07.682256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:17.014 [2024-07-23 14:10:07.682263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:17.014 [2024-07-23 14:10:07.682273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cec50 is same with the state(5) to be set 00:28:17.014 [2024-07-23 14:10:07.692178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cec50 (9): Bad file descriptor 00:28:17.014 14:10:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:17.014 14:10:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:17.014 [2024-07-23 14:10:07.702219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:17.952 14:10:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:17.952 [2024-07-23 14:10:08.706063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:17.952 14:10:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:17.952 14:10:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:17.952 14:10:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:17.952 14:10:08 -- common/autotest_common.sh@10 -- # set +x 00:28:17.952 14:10:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:17.952 14:10:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:18.890 [2024-07-23 14:10:09.730060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:18.890 [2024-07-23 14:10:09.730102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23cec50 with addr=10.0.0.2, port=4420 00:28:18.890 [2024-07-23 14:10:09.730117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23cec50 is same with the state(5) to be set 00:28:18.890 [2024-07-23 14:10:09.730138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:18.890 [2024-07-23 14:10:09.730148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:18.890 [2024-07-23 14:10:09.730157] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:18.890 [2024-07-23 14:10:09.730168] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:28:18.890 [2024-07-23 14:10:09.730524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cec50 (9): Bad file descriptor 00:28:18.890 [2024-07-23 14:10:09.730549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
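Note: the ETIMEDOUT/abort burst above is the intended fault, not a harness problem. discovery_remove_ifc.sh attached this controller through discovery with deliberately short reconnect/loss windows, then deleted the target-side address and downed its link, so queued admin commands come back ABORTED - SQ DELETION and reconnect attempts retry once per second until the 2 s controller-loss timeout gives up. A minimal sketch of that fault injection, assuming the socket, namespace, and interface names used in this run (rpc.py here stands for scripts/rpc.py in the SPDK tree):

    # Attach via discovery with tight reconnect/loss windows (host/discovery_remove_ifc.sh@69).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # Pull the target interface out from under the connection (@75/@76);
    # reads then fail with errno 110 (ETIMEDOUT) as logged above.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down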
00:28:18.891 [2024-07-23 14:10:09.730573] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:18.891 [2024-07-23 14:10:09.730595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.891 [2024-07-23 14:10:09.730608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.891 [2024-07-23 14:10:09.730619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.891 [2024-07-23 14:10:09.730629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.891 [2024-07-23 14:10:09.730638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.891 [2024-07-23 14:10:09.730648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.891 [2024-07-23 14:10:09.730659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.891 [2024-07-23 14:10:09.730669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.891 [2024-07-23 14:10:09.730680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:18.891 [2024-07-23 14:10:09.730693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:18.891 [2024-07-23 14:10:09.730708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
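Note: the get_bdev_list/sleep 1 pairs that bracket this failure burst are the script's wait_for_bdev loop. It polls the host app's bdev list once per second until the expected set of names appears; here it waits for an empty list, which happens once the controller-loss timeout deletes nvme0n1. A rough bash equivalent of that idiom, with scripts/rpc.py and the /tmp/host.sock socket as used in this log:

    # Poll bdev names until the list is empty; jq/sort/xargs mirror the trace above.
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    while [[ "$(get_bdev_list)" != '' ]]; do
        sleep 1   # nvme0n1 disappears once the 2 s ctrlr-loss timeout fires
    done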
00:28:18.891 [2024-07-23 14:10:09.731141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23ce140 (9): Bad file descriptor 00:28:18.891 [2024-07-23 14:10:09.732157] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:18.891 [2024-07-23 14:10:09.732173] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:18.891 14:10:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:18.891 14:10:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:18.891 14:10:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:19.828 14:10:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:19.828 14:10:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:19.828 14:10:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:19.828 14:10:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:19.828 14:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.828 14:10:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:19.828 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:28:19.828 14:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.828 14:10:10 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:19.828 14:10:10 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:19.828 14:10:10 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:20.088 14:10:10 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:20.088 14:10:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:20.088 14:10:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:20.088 14:10:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:20.088 14:10:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.088 14:10:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:20.088 14:10:10 -- common/autotest_common.sh@10 -- # set +x 00:28:20.088 14:10:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:20.088 14:10:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.088 14:10:10 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:20.088 14:10:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:21.070 [2024-07-23 14:10:11.744781] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:21.070 [2024-07-23 14:10:11.744806] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:21.071 [2024-07-23 14:10:11.744821] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:21.071 [2024-07-23 14:10:11.873291] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:21.071 14:10:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:21.071 14:10:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:21.071 14:10:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:21.071 14:10:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:21.071 14:10:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:21.071 14:10:11 -- host/discovery_remove_ifc.sh@29 -- # sort 
00:28:21.071 14:10:11 -- common/autotest_common.sh@10 -- # set +x 00:28:21.071 14:10:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:21.071 14:10:11 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:21.071 14:10:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:21.071 [2024-07-23 14:10:12.057402] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:21.071 [2024-07-23 14:10:12.057440] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:21.071 [2024-07-23 14:10:12.057461] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:21.071 [2024-07-23 14:10:12.057475] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:21.071 [2024-07-23 14:10:12.057485] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:21.071 [2024-07-23 14:10:12.064149] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23dd7a0 was disconnected and freed. delete nvme_qpair. 00:28:22.008 14:10:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:22.008 14:10:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:22.008 14:10:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:22.008 14:10:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:22.008 14:10:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:22.008 14:10:12 -- common/autotest_common.sh@10 -- # set +x 00:28:22.008 14:10:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:22.008 14:10:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:22.267 14:10:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:28:22.267 14:10:13 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:28:22.268 14:10:13 -- host/discovery_remove_ifc.sh@90 -- # killprocess 3414819 00:28:22.268 14:10:13 -- common/autotest_common.sh@926 -- # '[' -z 3414819 ']' 00:28:22.268 14:10:13 -- common/autotest_common.sh@930 -- # kill -0 3414819 00:28:22.268 14:10:13 -- common/autotest_common.sh@931 -- # uname 00:28:22.268 14:10:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:22.268 14:10:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3414819 00:28:22.268 14:10:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:22.268 14:10:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:22.268 14:10:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3414819' 00:28:22.268 killing process with pid 3414819 00:28:22.268 14:10:13 -- common/autotest_common.sh@945 -- # kill 3414819 00:28:22.268 14:10:13 -- common/autotest_common.sh@950 -- # wait 3414819 00:28:22.268 14:10:13 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:28:22.268 14:10:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:22.268 14:10:13 -- nvmf/common.sh@116 -- # sync 00:28:22.268 14:10:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:22.268 14:10:13 -- nvmf/common.sh@119 -- # set +e 00:28:22.268 14:10:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:22.268 14:10:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:22.532 rmmod nvme_tcp 00:28:22.532 rmmod nvme_fabrics 00:28:22.532 rmmod nvme_keyring 00:28:22.532 14:10:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:22.532 14:10:13 -- nvmf/common.sh@123 -- # set -e 00:28:22.532 14:10:13 
-- nvmf/common.sh@124 -- # return 0 00:28:22.532 14:10:13 -- nvmf/common.sh@477 -- # '[' -n 3414614 ']' 00:28:22.532 14:10:13 -- nvmf/common.sh@478 -- # killprocess 3414614 00:28:22.532 14:10:13 -- common/autotest_common.sh@926 -- # '[' -z 3414614 ']' 00:28:22.532 14:10:13 -- common/autotest_common.sh@930 -- # kill -0 3414614 00:28:22.532 14:10:13 -- common/autotest_common.sh@931 -- # uname 00:28:22.532 14:10:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:22.532 14:10:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3414614 00:28:22.532 14:10:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:22.532 14:10:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:22.532 14:10:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3414614' 00:28:22.532 killing process with pid 3414614 00:28:22.532 14:10:13 -- common/autotest_common.sh@945 -- # kill 3414614 00:28:22.532 14:10:13 -- common/autotest_common.sh@950 -- # wait 3414614 00:28:22.793 14:10:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:22.793 14:10:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:22.793 14:10:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:22.793 14:10:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:22.793 14:10:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:22.793 14:10:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:22.793 14:10:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:22.793 14:10:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.700 14:10:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:24.700 00:28:24.700 real 0m21.706s 00:28:24.700 user 0m27.377s 00:28:24.700 sys 0m4.995s 00:28:24.700 14:10:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:24.700 14:10:15 -- common/autotest_common.sh@10 -- # set +x 00:28:24.700 ************************************ 00:28:24.700 END TEST nvmf_discovery_remove_ifc 00:28:24.700 ************************************ 00:28:24.700 14:10:15 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:24.701 14:10:15 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:24.701 14:10:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:24.701 14:10:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:24.701 14:10:15 -- common/autotest_common.sh@10 -- # set +x 00:28:24.701 ************************************ 00:28:24.701 START TEST nvmf_digest 00:28:24.701 ************************************ 00:28:24.701 14:10:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:24.961 * Looking for test storage... 
00:28:24.961 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:24.961 14:10:15 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.961 14:10:15 -- nvmf/common.sh@7 -- # uname -s 00:28:24.961 14:10:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.961 14:10:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.961 14:10:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.961 14:10:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.961 14:10:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:24.961 14:10:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:24.961 14:10:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.961 14:10:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:24.961 14:10:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.961 14:10:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:24.961 14:10:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:24.961 14:10:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:24.961 14:10:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.961 14:10:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:24.961 14:10:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:24.961 14:10:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.961 14:10:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.961 14:10:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.961 14:10:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.961 14:10:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.961 14:10:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.961 14:10:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.961 14:10:15 -- paths/export.sh@5 -- # export PATH 00:28:24.961 14:10:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.961 14:10:15 -- nvmf/common.sh@46 -- # : 0 00:28:24.961 14:10:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:24.961 14:10:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:24.961 14:10:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:24.961 14:10:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.961 14:10:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.961 14:10:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:24.961 14:10:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:24.961 14:10:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:24.961 14:10:15 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:24.961 14:10:15 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:24.961 14:10:15 -- host/digest.sh@16 -- # runtime=2 00:28:24.961 14:10:15 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:28:24.961 14:10:15 -- host/digest.sh@132 -- # nvmftestinit 00:28:24.961 14:10:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:24.961 14:10:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:24.961 14:10:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:24.961 14:10:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:24.961 14:10:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:24.961 14:10:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.961 14:10:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:24.961 14:10:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.961 14:10:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:24.961 14:10:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:24.961 14:10:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:24.961 14:10:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.241 14:10:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:30.241 14:10:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:30.241 14:10:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:30.241 14:10:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:30.241 14:10:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:30.241 14:10:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:30.241 14:10:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:30.241 14:10:20 -- 
nvmf/common.sh@294 -- # net_devs=() 00:28:30.241 14:10:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:30.241 14:10:20 -- nvmf/common.sh@295 -- # e810=() 00:28:30.241 14:10:20 -- nvmf/common.sh@295 -- # local -ga e810 00:28:30.241 14:10:20 -- nvmf/common.sh@296 -- # x722=() 00:28:30.241 14:10:20 -- nvmf/common.sh@296 -- # local -ga x722 00:28:30.241 14:10:20 -- nvmf/common.sh@297 -- # mlx=() 00:28:30.241 14:10:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:30.241 14:10:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:30.241 14:10:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:30.241 14:10:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:30.241 14:10:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:30.241 14:10:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:30.241 14:10:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:30.241 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:30.241 14:10:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:30.241 14:10:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:30.241 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:30.241 14:10:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:30.241 14:10:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:30.241 14:10:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.241 14:10:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:30.241 14:10:20 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.241 14:10:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:30.241 Found net devices under 0000:86:00.0: cvl_0_0 00:28:30.241 14:10:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.241 14:10:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:30.241 14:10:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:30.241 14:10:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:30.241 14:10:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:30.241 14:10:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:30.241 Found net devices under 0000:86:00.1: cvl_0_1 00:28:30.241 14:10:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:30.241 14:10:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:30.241 14:10:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:30.241 14:10:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:30.241 14:10:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:30.241 14:10:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.241 14:10:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.241 14:10:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:30.241 14:10:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:30.241 14:10:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:30.241 14:10:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:30.241 14:10:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:30.241 14:10:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:30.241 14:10:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.241 14:10:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:30.241 14:10:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:30.241 14:10:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:30.241 14:10:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:30.241 14:10:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:30.241 14:10:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:30.241 14:10:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:30.241 14:10:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:30.241 14:10:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:30.241 14:10:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:30.241 14:10:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:30.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:28:30.241 00:28:30.241 --- 10.0.0.2 ping statistics --- 00:28:30.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.241 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:28:30.241 14:10:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:30.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:30.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.340 ms 00:28:30.241 00:28:30.241 --- 10.0.0.1 ping statistics --- 00:28:30.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.241 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:28:30.241 14:10:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.241 14:10:21 -- nvmf/common.sh@410 -- # return 0 00:28:30.241 14:10:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:30.241 14:10:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.241 14:10:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:30.241 14:10:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:30.241 14:10:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.241 14:10:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:30.241 14:10:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:30.241 14:10:21 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:30.241 14:10:21 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:28:30.241 14:10:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:30.241 14:10:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:30.241 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.241 ************************************ 00:28:30.241 START TEST nvmf_digest_clean 00:28:30.241 ************************************ 00:28:30.241 14:10:21 -- common/autotest_common.sh@1104 -- # run_digest 00:28:30.241 14:10:21 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:28:30.241 14:10:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:30.241 14:10:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:30.241 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.241 14:10:21 -- nvmf/common.sh@469 -- # nvmfpid=3420368 00:28:30.241 14:10:21 -- nvmf/common.sh@470 -- # waitforlisten 3420368 00:28:30.241 14:10:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:30.241 14:10:21 -- common/autotest_common.sh@819 -- # '[' -z 3420368 ']' 00:28:30.241 14:10:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.241 14:10:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:30.241 14:10:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.241 14:10:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:30.241 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:28:30.241 [2024-07-23 14:10:21.123260] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
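Note: before the digest suite proper begins, nvmftestinit has rebuilt the same split topology the discovery test used: one E810 port (cvl_0_0) is moved into a network namespace to act as the target at 10.0.0.2, the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings above confirm reachability in both directions. Condensed from the nvmf/common.sh trace in this run (a sketch of the helper, not the full function):

    # Target port lives in its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # nvmfappstart then launches the target inside that namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc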
00:28:30.241 [2024-07-23 14:10:21.123303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.241 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.241 [2024-07-23 14:10:21.180302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.500 [2024-07-23 14:10:21.258142] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:30.500 [2024-07-23 14:10:21.258247] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.500 [2024-07-23 14:10:21.258255] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.500 [2024-07-23 14:10:21.258262] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:30.500 [2024-07-23 14:10:21.258277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.069 14:10:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:31.069 14:10:21 -- common/autotest_common.sh@852 -- # return 0 00:28:31.069 14:10:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:31.069 14:10:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:31.069 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:28:31.069 14:10:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.069 14:10:21 -- host/digest.sh@120 -- # common_target_config 00:28:31.069 14:10:21 -- host/digest.sh@43 -- # rpc_cmd 00:28:31.069 14:10:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.069 14:10:21 -- common/autotest_common.sh@10 -- # set +x 00:28:31.069 null0 00:28:31.069 [2024-07-23 14:10:22.045851] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.069 [2024-07-23 14:10:22.070023] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.069 14:10:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.069 14:10:22 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:28:31.069 14:10:22 -- host/digest.sh@77 -- # local rw bs qd 00:28:31.070 14:10:22 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:31.070 14:10:22 -- host/digest.sh@80 -- # rw=randread 00:28:31.070 14:10:22 -- host/digest.sh@80 -- # bs=4096 00:28:31.070 14:10:22 -- host/digest.sh@80 -- # qd=128 00:28:31.070 14:10:22 -- host/digest.sh@82 -- # bperfpid=3420617 00:28:31.070 14:10:22 -- host/digest.sh@83 -- # waitforlisten 3420617 /var/tmp/bperf.sock 00:28:31.070 14:10:22 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:31.070 14:10:22 -- common/autotest_common.sh@819 -- # '[' -z 3420617 ']' 00:28:31.070 14:10:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:31.070 14:10:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:31.070 14:10:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:31.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:28:31.070 14:10:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:31.070 14:10:22 -- common/autotest_common.sh@10 -- # set +x 00:28:31.329 [2024-07-23 14:10:22.118549] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:31.329 [2024-07-23 14:10:22.118589] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420617 ] 00:28:31.329 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.329 [2024-07-23 14:10:22.170232] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.329 [2024-07-23 14:10:22.241561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.898 14:10:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:31.898 14:10:22 -- common/autotest_common.sh@852 -- # return 0 00:28:31.898 14:10:22 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:31.898 14:10:22 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:31.898 14:10:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:32.156 14:10:23 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.156 14:10:23 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:32.725 nvme0n1 00:28:32.725 14:10:23 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:32.725 14:10:23 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:32.725 Running I/O for 2 seconds... 
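Each run_bperf iteration above follows the same RPC choreography against the freshly started bdevperf app: finish subsystem init (the app was launched with --wait-for-rpc), attach an NVMe-oF controller with the TCP data digest enabled, then start the workload through the bdevperf helper script. Condensed from the commands visible in the trace, with the long workspace prefix dropped (paths relative to the spdk checkout):

sock=/var/tmp/bperf.sock
# bdevperf stays idle until its subsystems are initialized
scripts/rpc.py -s $sock framework_start_init
# --ddgst turns on the NVMe/TCP data digest (a CRC32C over each data PDU),
# which is what routes every I/O through the accel framework's crc32c path
scripts/rpc.py -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# kick off the workload configured on the bdevperf command line (-w/-o/-q/-t)
examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests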
00:28:34.629 00:28:34.629 Latency(us) 00:28:34.629 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.629 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:34.629 nvme0n1 : 2.00 28637.86 111.87 0.00 0.00 4465.05 2137.04 24732.72 00:28:34.629 =================================================================================================================== 00:28:34.629 Total : 28637.86 111.87 0.00 0.00 4465.05 2137.04 24732.72 00:28:34.629 0 00:28:34.629 14:10:25 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:34.629 14:10:25 -- host/digest.sh@92 -- # get_accel_stats 00:28:34.629 14:10:25 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:34.629 14:10:25 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:34.629 | select(.opcode=="crc32c") 00:28:34.629 | "\(.module_name) \(.executed)"' 00:28:34.629 14:10:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:34.890 14:10:25 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:34.890 14:10:25 -- host/digest.sh@93 -- # exp_module=software 00:28:34.890 14:10:25 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:34.890 14:10:25 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:34.890 14:10:25 -- host/digest.sh@97 -- # killprocess 3420617 00:28:34.890 14:10:25 -- common/autotest_common.sh@926 -- # '[' -z 3420617 ']' 00:28:34.890 14:10:25 -- common/autotest_common.sh@930 -- # kill -0 3420617 00:28:34.890 14:10:25 -- common/autotest_common.sh@931 -- # uname 00:28:34.890 14:10:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:34.890 14:10:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3420617 00:28:34.890 14:10:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:34.890 14:10:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:34.890 14:10:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3420617' 00:28:34.890 killing process with pid 3420617 00:28:34.890 14:10:25 -- common/autotest_common.sh@945 -- # kill 3420617 00:28:34.890 Received shutdown signal, test time was about 2.000000 seconds 00:28:34.890 00:28:34.890 Latency(us) 00:28:34.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:34.890 =================================================================================================================== 00:28:34.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:34.890 14:10:25 -- common/autotest_common.sh@950 -- # wait 3420617 00:28:35.150 14:10:26 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:28:35.150 14:10:26 -- host/digest.sh@77 -- # local rw bs qd 00:28:35.150 14:10:26 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:35.150 14:10:26 -- host/digest.sh@80 -- # rw=randread 00:28:35.150 14:10:26 -- host/digest.sh@80 -- # bs=131072 00:28:35.150 14:10:26 -- host/digest.sh@80 -- # qd=16 00:28:35.150 14:10:26 -- host/digest.sh@82 -- # bperfpid=3421327 00:28:35.150 14:10:26 -- host/digest.sh@83 -- # waitforlisten 3421327 /var/tmp/bperf.sock 00:28:35.150 14:10:26 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:35.150 14:10:26 -- common/autotest_common.sh@819 -- # '[' -z 3421327 ']' 00:28:35.150 14:10:26 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:28:35.150 14:10:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:35.150 14:10:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:35.150 14:10:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:35.150 14:10:26 -- common/autotest_common.sh@10 -- # set +x 00:28:35.150 [2024-07-23 14:10:26.070735] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:35.150 [2024-07-23 14:10:26.070783] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421327 ] 00:28:35.150 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.150 Zero copy mechanism will not be used. 00:28:35.150 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.150 [2024-07-23 14:10:26.122071] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.409 [2024-07-23 14:10:26.192651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:35.978 14:10:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:35.978 14:10:26 -- common/autotest_common.sh@852 -- # return 0 00:28:35.978 14:10:26 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:35.978 14:10:26 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:35.978 14:10:26 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:36.238 14:10:27 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.238 14:10:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:36.497 nvme0n1 00:28:36.497 14:10:27 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:36.497 14:10:27 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:36.756 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.756 Zero copy mechanism will not be used. 00:28:36.756 Running I/O for 2 seconds... 
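The waitforlisten chatter repeated before every run (autotest_common.sh@823-852: local rpc_addr=..., max_retries=100, the 'Waiting for process...' echo, then (( i == 0 )) and return 0) is a poll loop that blocks until the new process is alive and answering on its UNIX-domain RPC socket. A minimal sketch of the same idea, not the actual helper; in particular, using rpc_get_methods as the liveness probe is an assumption:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i > 0; i--)); do
        kill -0 "$pid" 2>/dev/null || return 1   # process died while we waited
        # probe the RPC socket; any successful call proves the app is listening
        scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1
    done
    return 1                                     # retries exhausted
}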
00:28:38.663 00:28:38.663 Latency(us) 00:28:38.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.663 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:38.663 nvme0n1 : 2.00 2508.50 313.56 0.00 0.00 6375.73 4929.45 19717.79 00:28:38.663 =================================================================================================================== 00:28:38.663 Total : 2508.50 313.56 0.00 0.00 6375.73 4929.45 19717.79 00:28:38.663 0 00:28:38.663 14:10:29 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:38.663 14:10:29 -- host/digest.sh@92 -- # get_accel_stats 00:28:38.663 14:10:29 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:38.663 14:10:29 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:38.663 | select(.opcode=="crc32c") 00:28:38.663 | "\(.module_name) \(.executed)"' 00:28:38.663 14:10:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:38.922 14:10:29 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:38.922 14:10:29 -- host/digest.sh@93 -- # exp_module=software 00:28:38.922 14:10:29 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:38.922 14:10:29 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:38.922 14:10:29 -- host/digest.sh@97 -- # killprocess 3421327 00:28:38.922 14:10:29 -- common/autotest_common.sh@926 -- # '[' -z 3421327 ']' 00:28:38.922 14:10:29 -- common/autotest_common.sh@930 -- # kill -0 3421327 00:28:38.922 14:10:29 -- common/autotest_common.sh@931 -- # uname 00:28:38.922 14:10:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:38.922 14:10:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3421327 00:28:38.922 14:10:29 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:38.922 14:10:29 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:38.922 14:10:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3421327' 00:28:38.922 killing process with pid 3421327 00:28:38.922 14:10:29 -- common/autotest_common.sh@945 -- # kill 3421327 00:28:38.922 Received shutdown signal, test time was about 2.000000 seconds 00:28:38.922 00:28:38.923 Latency(us) 00:28:38.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:38.923 =================================================================================================================== 00:28:38.923 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:38.923 14:10:29 -- common/autotest_common.sh@950 -- # wait 3421327 00:28:39.182 14:10:30 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:28:39.182 14:10:30 -- host/digest.sh@77 -- # local rw bs qd 00:28:39.182 14:10:30 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:39.182 14:10:30 -- host/digest.sh@80 -- # rw=randwrite 00:28:39.182 14:10:30 -- host/digest.sh@80 -- # bs=4096 00:28:39.182 14:10:30 -- host/digest.sh@80 -- # qd=128 00:28:39.182 14:10:30 -- host/digest.sh@82 -- # bperfpid=3421993 00:28:39.182 14:10:30 -- host/digest.sh@83 -- # waitforlisten 3421993 /var/tmp/bperf.sock 00:28:39.182 14:10:30 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:39.182 14:10:30 -- common/autotest_common.sh@819 -- # '[' -z 3421993 ']' 00:28:39.182 14:10:30 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:28:39.182 14:10:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:39.182 14:10:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:39.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:39.182 14:10:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:39.182 14:10:30 -- common/autotest_common.sh@10 -- # set +x 00:28:39.182 [2024-07-23 14:10:30.078661] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:39.182 [2024-07-23 14:10:30.078711] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3421993 ] 00:28:39.182 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.182 [2024-07-23 14:10:30.132172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:39.441 [2024-07-23 14:10:30.205099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:40.010 14:10:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:40.010 14:10:30 -- common/autotest_common.sh@852 -- # return 0 00:28:40.010 14:10:30 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:40.010 14:10:30 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:40.010 14:10:30 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:40.269 14:10:31 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.269 14:10:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:40.529 nvme0n1 00:28:40.529 14:10:31 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:40.529 14:10:31 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:40.789 Running I/O for 2 seconds... 
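When reading these runs, note that the bdevperf command line itself encodes the workload. Annotated below from the invocation at host/digest.sh@81; the glosses follow bdevperf's usage text and are a reading aid, not authoritative documentation:

# -m 2             core mask 0x2, hence "Reactor started on core 1" above
# -r <sock>        private RPC socket for this bdevperf instance
# -w randwrite     access pattern (randread in the two earlier runs)
# -o 4096          I/O size in bytes
# -t 2             run time in seconds, hence "Running I/O for 2 seconds..."
# -q 128           queue depth
# -z               stay idle until a perform_tests RPC arrives
# --wait-for-rpc   defer subsystem init until framework_start_init
build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 \
    -q 128 -z --wait-for-rpc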
00:28:42.696 00:28:42.696 Latency(us) 00:28:42.696 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.696 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:42.696 nvme0n1 : 2.00 27778.10 108.51 0.00 0.00 4600.52 2194.03 24048.86 00:28:42.696 =================================================================================================================== 00:28:42.696 Total : 27778.10 108.51 0.00 0.00 4600.52 2194.03 24048.86 00:28:42.696 0 00:28:42.696 14:10:33 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:42.696 14:10:33 -- host/digest.sh@92 -- # get_accel_stats 00:28:42.696 14:10:33 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:42.696 14:10:33 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:42.696 | select(.opcode=="crc32c") 00:28:42.696 | "\(.module_name) \(.executed)"' 00:28:42.696 14:10:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:42.990 14:10:33 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:42.990 14:10:33 -- host/digest.sh@93 -- # exp_module=software 00:28:42.990 14:10:33 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:42.990 14:10:33 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:42.990 14:10:33 -- host/digest.sh@97 -- # killprocess 3421993 00:28:42.990 14:10:33 -- common/autotest_common.sh@926 -- # '[' -z 3421993 ']' 00:28:42.990 14:10:33 -- common/autotest_common.sh@930 -- # kill -0 3421993 00:28:42.990 14:10:33 -- common/autotest_common.sh@931 -- # uname 00:28:42.990 14:10:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:42.990 14:10:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3421993 00:28:42.990 14:10:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:42.990 14:10:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:42.990 14:10:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3421993' 00:28:42.990 killing process with pid 3421993 00:28:42.990 14:10:33 -- common/autotest_common.sh@945 -- # kill 3421993 00:28:42.990 Received shutdown signal, test time was about 2.000000 seconds 00:28:42.990 00:28:42.991 Latency(us) 00:28:42.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:42.991 =================================================================================================================== 00:28:42.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:42.991 14:10:33 -- common/autotest_common.sh@950 -- # wait 3421993 00:28:43.250 14:10:34 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:28:43.250 14:10:34 -- host/digest.sh@77 -- # local rw bs qd 00:28:43.250 14:10:34 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:43.250 14:10:34 -- host/digest.sh@80 -- # rw=randwrite 00:28:43.250 14:10:34 -- host/digest.sh@80 -- # bs=131072 00:28:43.250 14:10:34 -- host/digest.sh@80 -- # qd=16 00:28:43.250 14:10:34 -- host/digest.sh@82 -- # bperfpid=3422593 00:28:43.250 14:10:34 -- host/digest.sh@83 -- # waitforlisten 3422593 /var/tmp/bperf.sock 00:28:43.250 14:10:34 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:43.250 14:10:34 -- common/autotest_common.sh@819 -- # '[' -z 3422593 ']' 00:28:43.250 14:10:34 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:28:43.250 14:10:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:43.250 14:10:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:43.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:43.250 14:10:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:43.250 14:10:34 -- common/autotest_common.sh@10 -- # set +x 00:28:43.250 [2024-07-23 14:10:34.091842] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:43.250 [2024-07-23 14:10:34.091887] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422593 ] 00:28:43.250 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:43.250 Zero copy mechanism will not be used. 00:28:43.250 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.250 [2024-07-23 14:10:34.143893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.250 [2024-07-23 14:10:34.221653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:44.188 14:10:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:44.188 14:10:34 -- common/autotest_common.sh@852 -- # return 0 00:28:44.188 14:10:34 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:28:44.188 14:10:34 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:28:44.188 14:10:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:44.188 14:10:35 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.188 14:10:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:44.758 nvme0n1 00:28:44.758 14:10:35 -- host/digest.sh@91 -- # bperf_py perform_tests 00:28:44.758 14:10:35 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:44.758 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:44.758 Zero copy mechanism will not be used. 00:28:44.758 Running I/O for 2 seconds... 
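This is the setup for the last of the four digest-clean runs. host/digest.sh@122-125 sweeps run_bperf over both access patterns and two block-size/queue-depth pairs; in outline:

# the four run_bperf invocations issued by host/digest.sh@122-125
run_bperf randread  4096   128   # 4 KiB blocks, queue depth 128
run_bperf randread  131072 16    # 128 KiB blocks, queue depth 16
run_bperf randwrite 4096   128
run_bperf randwrite 131072 16
# at 131072 bytes the I/O exceeds the 65536-byte zero-copy threshold, hence the
# "Zero copy mechanism will not be used" notices on the large-block runs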
00:28:46.666 00:28:46.666 Latency(us) 00:28:46.666 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.666 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:46.666 nvme0n1 : 2.01 1819.74 227.47 0.00 0.00 8770.49 4559.03 21085.50 00:28:46.666 =================================================================================================================== 00:28:46.666 Total : 1819.74 227.47 0.00 0.00 8770.49 4559.03 21085.50 00:28:46.666 0 00:28:46.666 14:10:37 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:28:46.666 14:10:37 -- host/digest.sh@92 -- # get_accel_stats 00:28:46.666 14:10:37 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:46.666 14:10:37 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:46.666 | select(.opcode=="crc32c") 00:28:46.666 | "\(.module_name) \(.executed)"' 00:28:46.666 14:10:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:46.926 14:10:37 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:28:46.926 14:10:37 -- host/digest.sh@93 -- # exp_module=software 00:28:46.926 14:10:37 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:28:46.926 14:10:37 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:46.926 14:10:37 -- host/digest.sh@97 -- # killprocess 3422593 00:28:46.926 14:10:37 -- common/autotest_common.sh@926 -- # '[' -z 3422593 ']' 00:28:46.926 14:10:37 -- common/autotest_common.sh@930 -- # kill -0 3422593 00:28:46.926 14:10:37 -- common/autotest_common.sh@931 -- # uname 00:28:46.926 14:10:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:46.926 14:10:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3422593 00:28:46.926 14:10:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:46.926 14:10:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:46.926 14:10:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3422593' 00:28:46.926 killing process with pid 3422593 00:28:46.926 14:10:37 -- common/autotest_common.sh@945 -- # kill 3422593 00:28:46.926 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.926 00:28:46.926 Latency(us) 00:28:46.926 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.926 =================================================================================================================== 00:28:46.926 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.926 14:10:37 -- common/autotest_common.sh@950 -- # wait 3422593 00:28:47.186 14:10:38 -- host/digest.sh@126 -- # killprocess 3420368 00:28:47.186 14:10:38 -- common/autotest_common.sh@926 -- # '[' -z 3420368 ']' 00:28:47.186 14:10:38 -- common/autotest_common.sh@930 -- # kill -0 3420368 00:28:47.186 14:10:38 -- common/autotest_common.sh@931 -- # uname 00:28:47.186 14:10:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:47.186 14:10:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3420368 00:28:47.186 14:10:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:47.186 14:10:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:47.186 14:10:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3420368' 00:28:47.186 killing process with pid 3420368 00:28:47.186 14:10:38 -- common/autotest_common.sh@945 -- # kill 3420368 00:28:47.186 14:10:38 -- common/autotest_common.sh@950 -- # wait 3420368 
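Each run above passes or fails on the accel framework's own counters rather than on the I/O numbers: digest.sh@92-95 pulls accel_get_stats over the bperf socket, extracts the crc32c row, and asserts that it executed at least once and in the expected module (software here, since no accel hardware is configured on this host). The check, condensed from the trace:

read -r acc_module acc_executed < <(
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 ))              # digests were actually computed...
[[ $acc_module == software ]]       # ...and by the module this config expects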
00:28:47.446 00:28:47.446 real 0m17.249s 00:28:47.446 user 0m33.714s 00:28:47.446 sys 0m3.695s 00:28:47.446 14:10:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:47.446 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:28:47.446 ************************************ 00:28:47.446 END TEST nvmf_digest_clean 00:28:47.446 ************************************ 00:28:47.446 14:10:38 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:28:47.446 14:10:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:47.446 14:10:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:47.446 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:28:47.446 ************************************ 00:28:47.446 START TEST nvmf_digest_error 00:28:47.446 ************************************ 00:28:47.446 14:10:38 -- common/autotest_common.sh@1104 -- # run_digest_error 00:28:47.446 14:10:38 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:28:47.446 14:10:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:47.446 14:10:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:47.446 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:28:47.446 14:10:38 -- nvmf/common.sh@469 -- # nvmfpid=3423298 00:28:47.446 14:10:38 -- nvmf/common.sh@470 -- # waitforlisten 3423298 00:28:47.446 14:10:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:47.446 14:10:38 -- common/autotest_common.sh@819 -- # '[' -z 3423298 ']' 00:28:47.446 14:10:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.446 14:10:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:47.446 14:10:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.446 14:10:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:47.446 14:10:38 -- common/autotest_common.sh@10 -- # set +x 00:28:47.446 [2024-07-23 14:10:38.421729] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:28:47.446 [2024-07-23 14:10:38.421775] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.446 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.705 [2024-07-23 14:10:38.478169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.705 [2024-07-23 14:10:38.554555] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:47.705 [2024-07-23 14:10:38.554663] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.705 [2024-07-23 14:10:38.554670] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.705 [2024-07-23 14:10:38.554677] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
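Both nvmf_tgt instances in this job start with -e 0xFFFF, so every tracepoint group is armed, and the startup banner above spells out the two retrieval options. Had a run here needed debugging, per those notices:

spdk_trace -s nvmf -i 0            # snapshot live events from shm instance 0
cp /dev/shm/nvmf_trace.0 .         # or keep the raw shm buffer for offline analysis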
00:28:47.705 [2024-07-23 14:10:38.554696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.274 14:10:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:48.274 14:10:39 -- common/autotest_common.sh@852 -- # return 0 00:28:48.274 14:10:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:48.274 14:10:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:48.274 14:10:39 -- common/autotest_common.sh@10 -- # set +x 00:28:48.274 14:10:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.274 14:10:39 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:48.274 14:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.274 14:10:39 -- common/autotest_common.sh@10 -- # set +x 00:28:48.274 [2024-07-23 14:10:39.244698] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:48.274 14:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.274 14:10:39 -- host/digest.sh@104 -- # common_target_config 00:28:48.274 14:10:39 -- host/digest.sh@43 -- # rpc_cmd 00:28:48.274 14:10:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.274 14:10:39 -- common/autotest_common.sh@10 -- # set +x 00:28:48.534 null0 00:28:48.534 [2024-07-23 14:10:39.338223] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.534 [2024-07-23 14:10:39.362402] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.534 14:10:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.534 14:10:39 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:28:48.534 14:10:39 -- host/digest.sh@54 -- # local rw bs qd 00:28:48.534 14:10:39 -- host/digest.sh@56 -- # rw=randread 00:28:48.534 14:10:39 -- host/digest.sh@56 -- # bs=4096 00:28:48.534 14:10:39 -- host/digest.sh@56 -- # qd=128 00:28:48.534 14:10:39 -- host/digest.sh@58 -- # bperfpid=3423508 00:28:48.534 14:10:39 -- host/digest.sh@60 -- # waitforlisten 3423508 /var/tmp/bperf.sock 00:28:48.534 14:10:39 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:48.534 14:10:39 -- common/autotest_common.sh@819 -- # '[' -z 3423508 ']' 00:28:48.534 14:10:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:48.534 14:10:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:48.534 14:10:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:48.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:48.534 14:10:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:48.534 14:10:39 -- common/autotest_common.sh@10 -- # set +x 00:28:48.534 [2024-07-23 14:10:39.408348] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:28:48.534 [2024-07-23 14:10:39.408388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423508 ] 00:28:48.534 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.534 [2024-07-23 14:10:39.460686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.534 [2024-07-23 14:10:39.531665] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.472 14:10:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:49.472 14:10:40 -- common/autotest_common.sh@852 -- # return 0 00:28:49.472 14:10:40 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:49.472 14:10:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:49.472 14:10:40 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:49.472 14:10:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.472 14:10:40 -- common/autotest_common.sh@10 -- # set +x 00:28:49.472 14:10:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.472 14:10:40 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.472 14:10:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.731 nvme0n1 00:28:49.991 14:10:40 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:49.991 14:10:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.991 14:10:40 -- common/autotest_common.sh@10 -- # set +x 00:28:49.991 14:10:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.991 14:10:40 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:49.991 14:10:40 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:49.991 Running I/O for 2 seconds... 
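The flood of "data digest error" / "COMMAND TRANSIENT TRANSPORT ERROR" lines that follows is the intended outcome of nvmf_digest_error, not a failure: crc32c on the target has been assigned to the error-injection accel module, the injected corruption makes the data digests miscompare at the host, and the host keeps retrying because the controller was attached with --bdev-retry-count -1. The RPC choreography, condensed from the trace (rpc_cmd talks to the target's default socket, bperf_rpc to /var/tmp/bperf.sock):

tgt="scripts/rpc.py"                                # target, default /var/tmp/spdk.sock
bperf="scripts/rpc.py -s /var/tmp/bperf.sock"       # host-side bdevperf app

$tgt accel_assign_opc -o crc32c -m error            # digest.sh@103: crc32c -> error module
$bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$tgt accel_error_inject_error -o crc32c -t disable  # no injection while connecting
$bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0  # attaches cleanly
$tgt accel_error_inject_error -o crc32c -t corrupt -i 256   # now corrupt digests (-i 256 as issued in this run)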
00:28:49.991 [2024-07-23 14:10:40.862485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.862518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.862529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.873413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.873437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.873446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.881691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.881712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.881720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.890722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.890744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.890752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.899236] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.899257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:17634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.899265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.908153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.908173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:12231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.908181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.916741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.916762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.916770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.925787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.925807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.925815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.936510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.936529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:13661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.936537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.947947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.947967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.947974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.958159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.958178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:14337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.958186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.966913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.966932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.966939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.975530] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.975549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.975561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.984704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.984723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:22078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.984731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:40.993198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:40.993217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:40.993225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:49.991 [2024-07-23 14:10:41.005013] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:49.991 [2024-07-23 14:10:41.005032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:49.991 [2024-07-23 14:10:41.005040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.251 [2024-07-23 14:10:41.015240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.251 [2024-07-23 14:10:41.015260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.251 [2024-07-23 14:10:41.015268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.251 [2024-07-23 14:10:41.023883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.023902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.023909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.031556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.031576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:1839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.031584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.043934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.043954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.043962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.052804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.052823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.052831] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.060245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.060268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.060276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.071598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.071617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:4803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.071624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.085420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.085439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.085447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.094328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.094349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.094357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.103890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.103910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.103918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.111771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.111791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.111799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.121650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.121669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 
[2024-07-23 14:10:41.121677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.130943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.130962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:19913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.130970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.144635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.144655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.144663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.154119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.154138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.154146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.162309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.162329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.162337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.171749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.171769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.171777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.180417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.180438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:50.252 [2024-07-23 14:10:41.180445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:50.252 [2024-07-23 14:10:41.189690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:50.252 [2024-07-23 14:10:41.189710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1728 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.189718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.252 [2024-07-23 14:10:41.198598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.252 [2024-07-23 14:10:41.198619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:4894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.198627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.252 [2024-07-23 14:10:41.207096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.252 [2024-07-23 14:10:41.207115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.207124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.252 [2024-07-23 14:10:41.215865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.252 [2024-07-23 14:10:41.215885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.215893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.252 [2024-07-23 14:10:41.224879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.252 [2024-07-23 14:10:41.224899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.224910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.252 [2024-07-23 14:10:41.233461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.252 [2024-07-23 14:10:41.233480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.233488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.252 [2024-07-23 14:10:41.242006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.252 [2024-07-23 14:10:41.242026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.242034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.252 [2024-07-23 14:10:41.250997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.252 [2024-07-23 14:10:41.251017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.251025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.252 [2024-07-23 14:10:41.259428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.252 [2024-07-23 14:10:41.259448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:11002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.252 [2024-07-23 14:10:41.259455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.268181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.268201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:15362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.268209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.276989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.277009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.277017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.286031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.286057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.286066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.294314] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.294335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.294342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.303012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.303034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.303041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.311808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.311827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.311835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.320407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.320427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.320434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.329349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.329369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.329376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.337687] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.337707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.337714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.346783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.346803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.346811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.355224] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.355244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.355251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.363640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.363659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.363668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.372566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.372586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:14317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.372593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.381509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.381530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.381538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.513 [2024-07-23 14:10:41.389583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.513 [2024-07-23 14:10:41.389603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.513 [2024-07-23 14:10:41.389610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.398984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.399004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.399012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.407283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.407303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.407311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.416038] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.416061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:24098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.416085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.424977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.424997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.425005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.433293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.433312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.433320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.441901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.441921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:2840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.441928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.450888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.450907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.450918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.459339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.459359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:16173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.459366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.467719] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.467738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.467746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.476833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.476852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:10344 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.476860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.485526] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.485546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.485554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.493802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.493822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.493830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.502683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.502703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:9399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.502711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.511300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.511320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.511328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.519572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.519593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.519601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.514 [2024-07-23 14:10:41.528785] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.514 [2024-07-23 14:10:41.528808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.514 [2024-07-23 14:10:41.528816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.537617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.537637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.537644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.546247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.546267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.546275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.555345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.555364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.555372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.563515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.563535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.563542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.572183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.572202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.572210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.580838] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.580857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:17757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.580865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.591734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.591754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.591762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.602126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.602146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.602157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.610291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.610310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.610318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.619521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.619540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.619549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.629764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.629783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:7236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.629790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.641806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.641826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.641834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.651275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.651295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.651303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.659341] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.659360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.659368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.668336] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.668355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:4225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.668362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.679073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.679092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.679100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.690783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.690805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:24610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.690813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.699596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.699614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.699621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.708353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.775 [2024-07-23 14:10:41.708372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.775 [2024-07-23 14:10:41.708380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.775 [2024-07-23 14:10:41.716695] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.776 [2024-07-23 14:10:41.716714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.776 [2024-07-23 14:10:41.716722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.776 [2024-07-23 14:10:41.725100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.776 [2024-07-23 14:10:41.725124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.776 [2024-07-23 14:10:41.725131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.776 [2024-07-23 14:10:41.734005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.776 [2024-07-23 14:10:41.734024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.776 [2024-07-23 14:10:41.734032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.776 [2024-07-23 14:10:41.743366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.776 [2024-07-23 14:10:41.743386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.776 [2024-07-23 14:10:41.743394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.776 [2024-07-23 14:10:41.752672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.776 [2024-07-23 14:10:41.752691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.776 [2024-07-23 14:10:41.752698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.776 [2024-07-23 14:10:41.760370] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.776 [2024-07-23 14:10:41.760388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.776 [2024-07-23 14:10:41.760395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.776 [2024-07-23 14:10:41.774079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.776 [2024-07-23 14:10:41.774099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.776 [2024-07-23 14:10:41.774107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:50.776 [2024-07-23 14:10:41.784075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:50.776 [2024-07-23 14:10:41.784095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:50.776 [2024-07-23 14:10:41.784104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.792574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.792594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:7970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.792603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.801488] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.801507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.801515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.810606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.810625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.810633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.818972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.818992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.819000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.827643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.827662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.827670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.836439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.836458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:22821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.836466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.847787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.847806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.847817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.857496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.857515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.857523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.867466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.867485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.867493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.036 [2024-07-23 14:10:41.876519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.036 [2024-07-23 14:10:41.876539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:7652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.036 [2024-07-23 14:10:41.876548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.883977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.883997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.884004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.895713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.895732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.895739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.907984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.908003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20021 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.908010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.915767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.915787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.915795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.925620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.925640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.925648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.935068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.935092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13591 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.935100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.942884] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.942904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.942913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.952874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.952897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.952905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.962059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.962080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.962088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.970657] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.970677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:23536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.970685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.979048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.979067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.979075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.988233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.988252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:2584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.988260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:41.996427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:41.996446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:41.996453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:42.005573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:42.005592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:11552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:42.005600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:42.014100] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:42.014119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:42.014128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:42.022495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:42.022515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:18073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:42.022522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:42.031588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:42.031607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:42.031614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:42.040072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:42.040090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:42.040097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.037 [2024-07-23 14:10:42.048606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.037 [2024-07-23 14:10:42.048625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.037 [2024-07-23 14:10:42.048632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.057952] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.057971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.057979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.066275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.066294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:4383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.066302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.074781] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.074800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.074808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.083713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.083732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.083746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.092112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.092132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.092140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.100819] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.100838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.100846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.109560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.109580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.109588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.118064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.118085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:14744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.118093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.126545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.126566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.126574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.135513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.135532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.135540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.144346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.144366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:7094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.144373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.153260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.153279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.153287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.161595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.161614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:2551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.161621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.170135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.170153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.170161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.179015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.179034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:8954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.298 [2024-07-23 14:10:42.179048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.298 [2024-07-23 14:10:42.187529] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.298 [2024-07-23 14:10:42.187548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.187556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.195923] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.195942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.195950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.205022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.205041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.205054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.213361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.213381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.213389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.222101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.222120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.222128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.230989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.231008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.231019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.239416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.239436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.239443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.247806] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.247825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.247833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.256908] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.256928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.256935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.265439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.265458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.265466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.273437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.273456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.273463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.282623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.282642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.282650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.291015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.291035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.291048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.299693] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.299712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:24411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.299720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.299 [2024-07-23 14:10:42.308654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.299 [2024-07-23 14:10:42.308677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.299 [2024-07-23 14:10:42.308685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.559 [2024-07-23 14:10:42.317435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.317469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.317478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.325962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.325981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.325989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.334790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.334809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.334817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.343383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.343401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.343409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.351779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.351799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.351807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.360724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.360743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.360751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.369363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.369383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.369391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.378012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.378031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:14439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.378039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.386599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.386620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.386627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.395728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.395746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.395754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.404291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.404310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.404317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.412812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.412831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.412839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:51.560 [2024-07-23 14:10:42.421830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0)
00:28:51.560 [2024-07-23 14:10:42.421850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:14515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:51.560 [2024-07-23 14:10:42.421857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.430181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.430200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.430208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.438800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.438819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.438827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.447771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.447790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.447798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.456321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.456341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19324 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.456352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.464727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.464746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:17966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.464753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.473610] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.473629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.473637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.482259] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.482279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2010 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.482287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.490913] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.490933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.490941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.500360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.500380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.500388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.508853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.508872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.508879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.517710] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.517730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.517738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.560 [2024-07-23 14:10:42.526116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.560 [2024-07-23 14:10:42.526135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:1274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.560 [2024-07-23 14:10:42.526143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.561 [2024-07-23 14:10:42.535060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.561 [2024-07-23 14:10:42.535082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.561 [2024-07-23 14:10:42.535090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.561 [2024-07-23 14:10:42.543658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.561 [2024-07-23 14:10:42.543677] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.561 [2024-07-23 14:10:42.543685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.561 [2024-07-23 14:10:42.552112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.561 [2024-07-23 14:10:42.552131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:22607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.561 [2024-07-23 14:10:42.552139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.561 [2024-07-23 14:10:42.561048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.561 [2024-07-23 14:10:42.561067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.561 [2024-07-23 14:10:42.561075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.561 [2024-07-23 14:10:42.569518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.561 [2024-07-23 14:10:42.569539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.561 [2024-07-23 14:10:42.569549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.821 [2024-07-23 14:10:42.578315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.821 [2024-07-23 14:10:42.578335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.821 [2024-07-23 14:10:42.578342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.821 [2024-07-23 14:10:42.587270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.821 [2024-07-23 14:10:42.587290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.821 [2024-07-23 14:10:42.587298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.821 [2024-07-23 14:10:42.595759] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.821 [2024-07-23 14:10:42.595780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.821 [2024-07-23 14:10:42.595788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.821 [2024-07-23 14:10:42.604969] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.821 [2024-07-23 14:10:42.604991] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:10128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.821 [2024-07-23 14:10:42.604999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.821 [2024-07-23 14:10:42.613422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.821 [2024-07-23 14:10:42.613443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.613451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.621972] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.621993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.622001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.631050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.631070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.631079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.639427] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.639447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.639455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.648099] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.648118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.648126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.657298] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.657317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:15673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.657325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.665721] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.665741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.665749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.674606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.674625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:18828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.674633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.682878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.682898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.682909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.691645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.691665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.691673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.700594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.700615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.700622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.708917] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.708937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.708945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.717391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.717411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.717419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.726423] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.726443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.726450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.734670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.734690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.734698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.743735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.743755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:23412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.743763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.752230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.752249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:16861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.752257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.760775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.760798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.760805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.769726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.769745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.769753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.778266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.778285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.778293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:51.822 [2024-07-23 14:10:42.786607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.786627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.786635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.795634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.795654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.795661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.804228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.804248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:13462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.804256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.812616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.812637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:25429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.812645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.821581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.821600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.821608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:51.822 [2024-07-23 14:10:42.830053] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:51.822 [2024-07-23 14:10:42.830073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:21110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:51.822 [2024-07-23 14:10:42.830084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:52.082 [2024-07-23 14:10:42.838592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x8ef9c0) 00:28:52.082 [2024-07-23 14:10:42.838611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:52.082 [2024-07-23 14:10:42.838619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
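Each repeated triplet above is one corrupted read as SPDK reports it: nvme_tcp.c detects a data digest (CRC32C) mismatch on the payload received over the TCP qpair, nvme_qpair.c prints the READ command the digest belonged to, and the completion is surfaced as NVMe status COMMAND TRANSIENT TRANSPORT ERROR (status code type 00h, code 22h), the same status the harness counts below. To tally the triplets when reading a saved console log offline, a one-liner along these lines is enough (illustrative only, not part of the harness; the filename is a placeholder):

    grep -c 'data digest error on tqpair' console.log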
00:28:52.082
00:28:52.082 Latency(us)
00:28:52.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.082 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:52.082 nvme0n1 : 2.00 28323.93 110.64 0.00 0.00 4514.88 2194.03 15614.66
00:28:52.082 ===================================================================================================================
00:28:52.082 Total : 28323.93 110.64 0.00 0.00 4514.88 2194.03 15614.66
00:28:52.082 0
00:28:52.082 14:10:42 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:52.082 14:10:42 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:52.082 14:10:42 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:52.082 | .driver_specific
00:28:52.082 | .nvme_error
00:28:52.082 | .status_code
00:28:52.082 | .command_transient_transport_error'
00:28:52.082 14:10:42 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:52.082 14:10:43 -- host/digest.sh@71 -- # (( 222 > 0 ))
00:28:52.083 14:10:43 -- host/digest.sh@73 -- # killprocess 3423508
00:28:52.083 14:10:43 -- common/autotest_common.sh@926 -- # '[' -z 3423508 ']'
00:28:52.083 14:10:43 -- common/autotest_common.sh@930 -- # kill -0 3423508
00:28:52.083 14:10:43 -- common/autotest_common.sh@931 -- # uname
00:28:52.083 14:10:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:52.083 14:10:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3423508
00:28:52.083 14:10:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:28:52.083 14:10:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:28:52.083 14:10:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3423508'
killing process with pid 3423508
14:10:43 -- common/autotest_common.sh@945 -- # kill 3423508
Received shutdown signal, test time was about 2.000000 seconds
00:28:52.083
00:28:52.083 Latency(us)
00:28:52.083 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:52.083 ===================================================================================================================
00:28:52.083 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:10:43 -- common/autotest_common.sh@950 -- # wait 3423508
14:10:43 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
14:10:43 -- host/digest.sh@54 -- # local rw bs qd
14:10:43 -- host/digest.sh@56 -- # rw=randread
14:10:43 -- host/digest.sh@56 -- # bs=131072
14:10:43 -- host/digest.sh@56 -- # qd=16
14:10:43 -- host/digest.sh@58 -- # bperfpid=3424220
14:10:43 -- host/digest.sh@60 -- # waitforlisten 3424220 /var/tmp/bperf.sock
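The asserted (( 222 > 0 )) above is that per-status error counter read back from bdevperf over its RPC socket, and the summary table is consistent with a digest-failure run: 28323.93 IOPS of 4096-byte reads is 28323.93 x 4096 / 2^20 ≈ 110.64 MiB/s, matching the MiB/s column. A minimal sketch of the counting helper, reconstructed from the host/digest.sh trace lines above (the body is inferred from the logged commands, not copied from the script):

    # Sketch: read bdevperf's per-bdev iostat over its RPC socket and pull out
    # the count of completions with status 'command transient transport error'.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_transient_errcount() {
        "$rpc_py" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }
    # Mirrors the logged check: (( $(get_transient_errcount nvme0n1) > 0 ))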
14:10:43 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:52.342 14:10:43 -- common/autotest_common.sh@819 -- # '[' -z 3424220 ']'
00:28:52.342 14:10:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:52.342 14:10:43 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:52.342 14:10:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:10:43 -- common/autotest_common.sh@828 -- # xtrace_disable
14:10:43 -- common/autotest_common.sh@10 -- # set +x
[2024-07-23 14:10:43.345492] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
[2024-07-23 14:10:43.345543] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424220 ]
I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-23 14:10:43.398316] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-23 14:10:43.475279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
14:10:44 -- common/autotest_common.sh@848 -- # (( i == 0 ))
14:10:44 -- common/autotest_common.sh@852 -- # return 0
14:10:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:10:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
14:10:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
14:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable
14:10:44 -- common/autotest_common.sh@10 -- # set +x
14:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
14:10:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
14:10:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
14:10:44 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
14:10:44 -- common/autotest_common.sh@551 -- # xtrace_disable
14:10:44 -- common/autotest_common.sh@10 -- # set +x
14:10:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
14:10:44 -- host/digest.sh@69 -- # bperf_py perform_tests
14:10:44 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
Running I/O for 2 seconds...
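The trace above is the complete setup for the second error pass (randread, 131072-byte I/O, queue depth 16): bdevperf is launched in wait-for-RPC mode, NVMe error statistics are switched on with unlimited bdev retries so failed commands are counted rather than fatal, crc32c fault injection is cleared while the controller attaches over TCP with data digest enabled (--ddgst), injection is then re-armed in corrupt mode at interval 32, and perform_tests starts the 2-second workload. A condensed replay of that sequence, with every path, address, and flag taken from the logged commands (a sketch of the order of operations, not the digest.sh source; the real harness also waits for the RPC socket between launch and first RPC):

    #!/usr/bin/env bash
    # Condensed replay of the logged setup; all values come from the trace above.
    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bperf.sock

    # Launch bdevperf on core 1 (-m 2), 128 KiB random reads, qd 16, paused (-z).
    "$spdk"/build/examples/bdevperf -m 2 -r "$sock" -w randread -o 131072 -t 2 -q 16 -z &

    # Keep per-status NVMe error counters; retry indefinitely instead of failing I/O.
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Make sure no crc32c fault is armed while the controller attaches.
    "$spdk"/scripts/rpc.py -s "$sock" accel_error_inject_error -o crc32c -t disable
    # Attach the target with data digest enabled on the TCP qpair.
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm injection: corrupt crc32c results at the logged interval of 32.
    "$spdk"/scripts/rpc.py -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Start the configured workload; every corrupted digest shows up below.
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests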
00:28:53.948 [2024-07-23 14:10:44.791161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820)
00:28:53.948 [2024-07-23 14:10:44.791196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:53.948 [2024-07-23 14:10:44.791207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... repeated data-digest-error triplets omitted: the same pattern on tqpair=(0x18ff820) recurs from 14:10:44.805718 through 14:10:45.454090, always READ sqid:1 cid:15 len:32, with only the timestamp, lba, and sqhd changing ...]
00:28:54.471 [2024-07-23 14:10:45.465598] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820)
00:28:54.471 [2024-07-23 14:10:45.465619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.471 [2024-07-23 14:10:45.465627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061
p:0 m:0 dnr:0 00:28:54.471 [2024-07-23 14:10:45.476868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.471 [2024-07-23 14:10:45.476888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.471 [2024-07-23 14:10:45.476896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.488239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.488259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.488267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.500005] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.500026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.500034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.512770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.512791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.512798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.527260] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.527280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.527288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.542563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.542585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.542593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.556607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.556628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.556636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.570322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.570342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.570354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.583887] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.583907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.583915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.598742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.598763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.598771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.613411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.613431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.613440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.626118] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.626139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.626147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.639870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.639891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.639899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.654334] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.654355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.654363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.666700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.666721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.666729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.679321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.679341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.679349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.692579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.692604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.692612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.705837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.705858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.705866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.718800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.718820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.731 [2024-07-23 14:10:45.718827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.731 [2024-07-23 14:10:45.732643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.731 [2024-07-23 14:10:45.732666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.732 [2024-07-23 14:10:45.732673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.732 [2024-07-23 14:10:45.747307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.732 [2024-07-23 14:10:45.747329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:54.732 [2024-07-23 14:10:45.747337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.763106] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.763127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.763135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.777239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.777259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.777267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.790903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.790924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.790931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.805308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.805329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.805340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.818241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.818261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.818269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.831183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.831202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.831211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.844057] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.844077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.844085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.856086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.856113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.856122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.869291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.869312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.869320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.881930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.881950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.881958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.895791] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.895811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.895819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.911319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.911339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.911347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.923890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.923914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.923922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.936324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.936343] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.936351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.948150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.948170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.948177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.960144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.960164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.960171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.972309] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.972328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.972335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.985997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.986017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.986024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:54.992 [2024-07-23 14:10:45.999499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:54.992 [2024-07-23 14:10:45.999519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-07-23 14:10:45.999527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.252 [2024-07-23 14:10:46.013310] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.252 [2024-07-23 14:10:46.013330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-07-23 14:10:46.013337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-07-23 14:10:46.032663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 
00:28:55.252 [2024-07-23 14:10:46.032682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-07-23 14:10:46.032689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.252 [2024-07-23 14:10:46.049468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.252 [2024-07-23 14:10:46.049487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-07-23 14:10:46.049495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.252 [2024-07-23 14:10:46.063479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.252 [2024-07-23 14:10:46.063499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-07-23 14:10:46.063507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.252 [2024-07-23 14:10:46.082611] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.252 [2024-07-23 14:10:46.082631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-07-23 14:10:46.082639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-07-23 14:10:46.098897] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.252 [2024-07-23 14:10:46.098916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-07-23 14:10:46.098923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.252 [2024-07-23 14:10:46.111639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.252 [2024-07-23 14:10:46.111658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-07-23 14:10:46.111666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.252 [2024-07-23 14:10:46.131059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.252 [2024-07-23 14:10:46.131079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-07-23 14:10:46.131087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.144575] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.144595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.144603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.155828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.155848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.155855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.167134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.167153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.167164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.178661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.178680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.178688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.190272] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.190293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.190301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.201522] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.201541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.201548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.213029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.213054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.213062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.224648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.224667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.224674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.236994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.237013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.237021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.253 [2024-07-23 14:10:46.254187] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.253 [2024-07-23 14:10:46.254206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.253 [2024-07-23 14:10:46.254213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.512 [2024-07-23 14:10:46.274517] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.512 [2024-07-23 14:10:46.274536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.512 [2024-07-23 14:10:46.274544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.512 [2024-07-23 14:10:46.289564] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.512 [2024-07-23 14:10:46.289588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.512 [2024-07-23 14:10:46.289596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.512 [2024-07-23 14:10:46.302780] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.512 [2024-07-23 14:10:46.302800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.512 [2024-07-23 14:10:46.302808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.512 [2024-07-23 14:10:46.315421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.512 [2024-07-23 14:10:46.315442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.512 [2024-07-23 14:10:46.315450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.512 [2024-07-23 14:10:46.327635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.512 [2024-07-23 14:10:46.327654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.512 [2024-07-23 14:10:46.327662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.512 [2024-07-23 14:10:46.340487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.340508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.340516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.351894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.351912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.351920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.363278] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.363297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.363304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.374596] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.374615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.374623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.386098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.386117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.386128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.397332] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.397351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.397359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.408902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.408921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.408928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.420290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.420309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.420317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.431541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.431560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.431567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.442947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.442965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.442973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.454200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.454219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.454227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.465479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.465498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.465505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.476857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.476877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:55.513 [2024-07-23 14:10:46.476884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.488222] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.488246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.488254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.499472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.499490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.499498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.510933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.510952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.510959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.513 [2024-07-23 14:10:46.522320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.513 [2024-07-23 14:10:46.522339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.513 [2024-07-23 14:10:46.522346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.533578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.533597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.533605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.544990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.545009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.545016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.556308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.556327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.556335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.567666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.567684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.567692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.578882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.578901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.578908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.590320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.590339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.590346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.601559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.601578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.601585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.612907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.612926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.612934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.624239] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.624258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.624265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.635507] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.635526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.635533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.646893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.646912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.646920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.658364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.658382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.658390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.669696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.669715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.669723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.681019] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.681038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.681055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.692399] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.692418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.692425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.703813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.703831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.703838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.715062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 
00:28:55.773 [2024-07-23 14:10:46.715081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.715088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.726297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.726316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.726323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.737561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.737580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.737587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.748919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.748938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.748945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.760116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.760135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.760143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.773 [2024-07-23 14:10:46.771414] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x18ff820) 00:28:55.773 [2024-07-23 14:10:46.771434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.773 [2024-07-23 14:10:46.771441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:55.773 00:28:55.773 Latency(us) 00:28:55.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:55.773 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:55.773 nvme0n1 : 2.01 2427.43 303.43 0.00 0.00 6586.55 5499.33 21541.40 00:28:55.773 =================================================================================================================== 00:28:55.773 Total : 2427.43 303.43 0.00 0.00 6586.55 5499.33 21541.40 00:28:55.773 0 00:28:56.032 14:10:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:56.032 14:10:46 -- 
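A quick consistency check, worked from the table's own numbers: throughput = IOPS x IO size = 2427.43 x 131072 B/s ~ 318.17 MB/s = 303.43 MiB/s, matching the MiB/s column; and by Little's law, average latency ~ queue depth / IOPS = 16 / 2427.43 s ~ 6590 us, in line with the reported 6586.55 us. A rate this low for 128 KiB random reads is consistent with every I/O first completing with an injected digest error and being retried.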
00:28:56.032 14:10:46 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:56.032 14:10:46 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:56.032 14:10:46 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:56.032 | .driver_specific
00:28:56.032 | .nvme_error
00:28:56.032 | .status_code
00:28:56.032 | .command_transient_transport_error'
00:28:56.032 14:10:46 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:56.032 14:10:46 -- host/digest.sh@71 -- # (( 157 > 0 ))
00:28:56.032 14:10:46 -- host/digest.sh@73 -- # killprocess 3424220
00:28:56.032 14:10:46 -- common/autotest_common.sh@926 -- # '[' -z 3424220 ']'
00:28:56.032 14:10:46 -- common/autotest_common.sh@930 -- # kill -0 3424220
00:28:56.032 14:10:46 -- common/autotest_common.sh@931 -- # uname
00:28:56.032 14:10:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:56.032 14:10:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3424220
00:28:56.032 14:10:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:28:56.032 14:10:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:28:56.032 14:10:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3424220'
killing process with pid 3424220
14:10:47 -- common/autotest_common.sh@945 -- # kill 3424220
Received shutdown signal, test time was about 2.000000 seconds
00:28:56.032
00:28:56.032 Latency(us)
00:28:56.032 Device Information : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min      max
00:28:56.032 ===================================================================================================================
00:28:56.032 Total : 0.00   0.00   0.00   0.00   0.00   0.00   0.00
00:28:56.032 14:10:47 -- common/autotest_common.sh@950 -- # wait 3424220
00:28:56.291 14:10:47 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:28:56.291 14:10:47 -- host/digest.sh@54 -- # local rw bs qd
00:28:56.291 14:10:47 -- host/digest.sh@56 -- # rw=randwrite
00:28:56.291 14:10:47 -- host/digest.sh@56 -- # bs=4096
00:28:56.291 14:10:47 -- host/digest.sh@56 -- # qd=128
00:28:56.291 14:10:47 -- host/digest.sh@58 -- # bperfpid=3424883
00:28:56.291 14:10:47 -- host/digest.sh@60 -- # waitforlisten 3424883 /var/tmp/bperf.sock
00:28:56.291 14:10:47 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:56.291 14:10:47 -- common/autotest_common.sh@819 -- # '[' -z 3424883 ']'
00:28:56.291 14:10:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:56.291 14:10:47 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:56.291 14:10:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:10:47 -- common/autotest_common.sh@828 -- # xtrace_disable
00:28:56.291 14:10:47 -- common/autotest_common.sh@10 -- # set +x
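The get_transient_errcount step traced above reduces to one RPC plus a jq filter over the bdev_get_iostat JSON; a minimal standalone sketch of the same check, using the rpc.py path and bperf socket as this job does (the RPC and count variable names here are illustrative):

  # With bdev_nvme_set_options --nvme-error-stat in effect, the nvme bdev
  # module keeps per-NVMe-status-code error counters under driver_specific;
  # pull out the transient-transport-error count and assert it is non-zero.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  count=$($RPC -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
          | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 ))   # this run reported 157, so the check passed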
00:28:56.291 14:10:47 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:28:56.291 14:10:47 -- host/digest.sh@54 -- # local rw bs qd
00:28:56.291 14:10:47 -- host/digest.sh@56 -- # rw=randwrite
00:28:56.291 14:10:47 -- host/digest.sh@56 -- # bs=4096
00:28:56.291 14:10:47 -- host/digest.sh@56 -- # qd=128
00:28:56.291 14:10:47 -- host/digest.sh@58 -- # bperfpid=3424883
00:28:56.291 14:10:47 -- host/digest.sh@60 -- # waitforlisten 3424883 /var/tmp/bperf.sock
00:28:56.291 14:10:47 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:56.291 14:10:47 -- common/autotest_common.sh@819 -- # '[' -z 3424883 ']'
00:28:56.291 14:10:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:56.291 14:10:47 -- common/autotest_common.sh@824 -- # local max_retries=100
00:28:56.291 14:10:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:10:47 -- common/autotest_common.sh@828 -- # xtrace_disable
14:10:47 -- common/autotest_common.sh@10 -- # set +x
00:28:56.291 [2024-07-23 14:10:47.260311] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:28:56.291 [2024-07-23 14:10:47.260357] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3424883 ]
00:28:56.291 EAL: No free 2048 kB hugepages reported on node 1
00:28:56.551 [2024-07-23 14:10:47.313240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:56.551 [2024-07-23 14:10:47.390224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:28:57.119 14:10:48 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:28:57.119 14:10:48 -- common/autotest_common.sh@852 -- # return 0
00:28:57.119 14:10:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:57.119 14:10:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:57.378 14:10:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:57.378 14:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:57.378 14:10:48 -- common/autotest_common.sh@10 -- # set +x
00:28:57.378 14:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:57.378 14:10:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:57.378 14:10:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:57.637 nvme0n1
00:28:57.897 14:10:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:57.897 14:10:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:28:57.897 14:10:48 -- common/autotest_common.sh@10 -- # set +x
00:28:57.897 14:10:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:28:57.897 14:10:48 -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:57.897 14:10:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:57.897 Running I/O for 2 seconds...
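The randwrite error-injection run traced above condenses to the sketch below. Paths and flags are taken from the trace itself; the bare rpc.py calls assume that rpc_cmd addresses the target app's default RPC socket rather than /var/tmp/bperf.sock, and the sleep is a crude stand-in for the harness's waitforlisten:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# start bdevperf on a private RPC socket; -z makes it wait for perform_tests
$SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
sleep 1
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable    # target side: attach cleanly first
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256    # target side: re-arm crc32c corruption
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

With data digest enabled on the host (--ddgst) and the target's crc32c results corrupted, every completed WRITE below fails its digest check and is surfaced as a transient transport error, which is exactly what the counter checked earlier measures.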
00:28:57.897 [2024-07-23 14:10:48.794876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.795670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.795697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.804436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.804633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.804653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.813803] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.814014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.814033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.823114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.823337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.823355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.832409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.832628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.832646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.841727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.841951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.841969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.850971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.851194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.851212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 
sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.860192] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.860418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.860435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.869447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.869668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.869686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.878674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.878891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.878909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.887914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.888131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.888149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.897160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.897386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.897404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:57.897 [2024-07-23 14:10:48.906548] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:57.897 [2024-07-23 14:10:48.906778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:57.897 [2024-07-23 14:10:48.906796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.916125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.916344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:1830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.916362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.925656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.925879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:24383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.925897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.934974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.935206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.935225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.944316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.944537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.944555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.953531] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.953748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.953766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.962765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.962982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:9078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.963000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.972017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.972240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.972259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.981308] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.981529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.981547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.990547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.990763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.990781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:48.999750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:48.999975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:48.999996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:49.008981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.157 [2024-07-23 14:10:49.009199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:7692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.157 [2024-07-23 14:10:49.009217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.157 [2024-07-23 14:10:49.018245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.018472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.018489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.027470] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.027685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.027703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.036708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.036924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.036941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.045945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.046161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.046178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.055347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.055565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.055583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.064599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.064814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.064833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.073804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.074032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.074054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.083033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.083262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.083280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.092295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.092517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.092534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.101483] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.101702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.101719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.110703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.110925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.110942] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.119956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.120171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.120189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.129203] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.129423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:15802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.129440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.138453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.138670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.138687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.147686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.147905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:13515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.147922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.156884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.157104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.157122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.158 [2024-07-23 14:10:49.166165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.158 [2024-07-23 14:10:49.166381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.158 [2024-07-23 14:10:49.166399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.175617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.175850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.175868] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.184989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.185211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:875 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.185229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.194212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.194432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.194450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.203449] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.203667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.203684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.212684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.212900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.212917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.221935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.222159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.222177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.231175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.231395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.231413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.240425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.240649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 
14:10:49.240670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.249634] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.249851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:8768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.249869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.258853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.259076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.259095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.268108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.268323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.268342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.277324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.277543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.277560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.286562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.286781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.286799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.418 [2024-07-23 14:10:49.295784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.418 [2024-07-23 14:10:49.296001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.418 [2024-07-23 14:10:49.296019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.305133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.305352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:58.419 [2024-07-23 14:10:49.305370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.314433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.314652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.314670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.323646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.323870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.323888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.332896] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.333117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.333134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.342173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.342391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.342408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.351377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.351593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.351611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.360627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.360843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.360860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.369859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.370081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19419 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.370099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.379076] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.379291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.379309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.388327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.388548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:26 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.388566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.397553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.397771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.397789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.406819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.407037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.407059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.416406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.416641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.416659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.419 [2024-07-23 14:10:49.425708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.419 [2024-07-23 14:10:49.425928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.419 [2024-07-23 14:10:49.425945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.678 [2024-07-23 14:10:49.435135] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.678 [2024-07-23 14:10:49.435354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8451 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.678 [2024-07-23 14:10:49.435372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.678 [2024-07-23 14:10:49.444612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.678 [2024-07-23 14:10:49.444830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.678 [2024-07-23 14:10:49.444848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.678 [2024-07-23 14:10:49.453798] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.454013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:12506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.454031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.463063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.463279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.463297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.472289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.472506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.472524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.481540] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.481756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.481774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.490789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.491004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.491021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.499989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.500209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:17799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.500227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.509220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.509442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.509460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.518424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.518646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.518664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.527661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.527880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.527897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.536885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.537103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.537120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.546125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.546345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.546363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.555474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.555693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.555710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.564753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.564972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:37 nsid:1 lba:1982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.564993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.573943] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.574183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.574201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.583249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.583477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.583495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.592487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.592703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.592721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.601692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.601908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.601926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.610934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.611152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.611170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.620177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.620394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.620411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.629376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.629727] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:4369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.629746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.638622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.638847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.638865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.647786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fdeb0 00:28:58.679 [2024-07-23 14:10:49.648806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.648824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.657318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fef90 00:28:58.679 [2024-07-23 14:10:49.658365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.658383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.666679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fa3a0 00:28:58.679 [2024-07-23 14:10:49.667062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.667081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.675903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fa3a0 00:28:58.679 [2024-07-23 14:10:49.676409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.676426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.685190] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fa3a0 00:28:58.679 [2024-07-23 14:10:49.685443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:58.679 [2024-07-23 14:10:49.685460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:28:58.679 [2024-07-23 14:10:49.694535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fa3a0 00:28:58.680 [2024-07-23 14:10:49.694773] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:5203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:58.680 [2024-07-23 14:10:49.694793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:28:58.939 [2024-07-23 14:10:49.703890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a22a0) with pdu=0x2000190fa3a0
00:28:58.939 [2024-07-23 14:10:49.704136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:58.939 [2024-07-23 14:10:49.704162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0
[over a hundred further entries, 2024-07-23 14:10:49.713 through 14:10:50.765, omitted: each repeats this three-line pattern on tqpair=(0x14a22a0) -- a tcp.c:2034:data_crc32_calc_done *ERROR* "Data digest error" against one of the pdu values 0x2000190fa3a0, 0x2000190f9b30, 0x2000190f7538, 0x2000190f5be8, 0x2000190fc560, 0x2000190fef90, 0x2000190f6458, 0x2000190fbcf0, 0x2000190e9e10, 0x2000190f3a28, 0x2000190f92c0, 0x2000190ea680, 0x2000190f0bc0, 0x2000190f7da8, 0x2000190e8d30, 0x2000190f57b0, 0x2000190eff18 and 0x2000190f0788, followed by the affected WRITE (qid:1, len:1, SGL DATA BLOCK, varying cid and lba) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion]
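Each repetition above is one injected CRC32C failure making a full round trip: tcp.c flags the bad data digest on the PDU, and the qpair layer then prints the affected WRITE together with its TRANSIENT TRANSPORT ERROR (00/22) completion, so the two messages always arrive as a matched pair. A minimal sketch for cross-checking that pairing against the counter the script asserts on below (the (( 210 > 0 )) check), assuming this console output was saved to a file; bperf.log is a hypothetical name:

  # The two counts should agree with each other and track the iostat counter.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' bperf.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log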
00:28:59.757
00:28:59.757 Latency(us)
00:28:59.757 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:59.757 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:59.757 nvme0n1            :       2.00   26790.04     104.65       0.00     0.00    4770.16    2265.27   27810.06
00:28:59.757 ===================================================================================================================
00:28:59.757 Total              :               26790.04     104.65       0.00     0.00    4770.16    2265.27   27810.06
00:29:00.016 0
00:29:00.016 14:10:50 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:00.016 14:10:50 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:00.016 14:10:50 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:00.016 | .driver_specific
00:29:00.016 | .nvme_error
00:29:00.016 | .status_code
00:29:00.016 | .command_transient_transport_error'
00:29:00.016 14:10:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:00.016 14:10:50 -- host/digest.sh@71 -- # (( 210 > 0 ))
00:29:00.016 14:10:50 -- host/digest.sh@73 -- # killprocess 3424883
00:29:00.016 14:10:50 -- common/autotest_common.sh@926 -- # '[' -z 3424883 ']'
00:29:00.016 14:10:50 -- common/autotest_common.sh@930 -- # kill -0 3424883
00:29:00.016 14:10:50 -- common/autotest_common.sh@931 -- # uname
00:29:00.016 14:10:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:00.016 14:10:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3424883
00:29:00.016 14:10:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:00.016 14:10:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:00.016 14:10:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3424883'
killing process with pid 3424883
14:10:50 -- common/autotest_common.sh@945 -- # kill 3424883
Received shutdown signal, test time was about 2.000000 seconds
00:29:00
00:29:00 Latency(us)
00:29:00 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:00 ===================================================================================================================
00:29:00 Total              :                   0.00       0.00       0.00       0.00     0.00       0.00       0.00
14:10:50 -- common/autotest_common.sh@950 -- # wait 3424883
14:10:51 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
14:10:51 -- host/digest.sh@54 -- # local rw bs qd
14:10:51 -- host/digest.sh@56 -- # rw=randwrite
14:10:51 -- host/digest.sh@56 -- # bs=131072
14:10:51 -- host/digest.sh@56 -- # qd=16
14:10:51 -- host/digest.sh@58 -- # bperfpid=3425425
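run_bperf_err then repeats the experiment with 131072-byte random writes at queue depth 16. Condensing the xtrace lines above and below into a stand-alone sketch of the flow, with paths as in this workspace and the target at 10.0.0.2:4420 assumed to be already serving nqn.2016-06.io.spdk:cnode1 from earlier in the run:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Start bdevperf in wait-for-RPC mode (-z) on a private socket;
  # digest.sh waits for the socket (waitforlisten) before issuing RPCs.
  ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  # Record NVMe error statistics per status code, and retry failed I/O
  # indefinitely so injected digest errors are counted, not fatal.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the TCP controller with data digest (--ddgst) enabled.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Have the accel error module corrupt crc32c results (-i 32 as digest.sh passes it).
  ./scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
  # Drive the timed run; the error counter is read back afterwards (see below).
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests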
14:10:51 -- host/digest.sh@60 -- # waitforlisten 3425425 /var/tmp/bperf.sock
14:10:51 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
14:10:51 -- common/autotest_common.sh@819 -- # '[' -z 3425425 ']'
14:10:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
14:10:51 -- common/autotest_common.sh@824 -- # local max_retries=100
14:10:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
14:10:51 -- common/autotest_common.sh@828 -- # xtrace_disable
14:10:51 -- common/autotest_common.sh@10 -- # set +x
00:29:00.275 [2024-07-23 14:10:51.246921] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:29:00.275 [2024-07-23 14:10:51.246967] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425425 ]
00:29:00.275 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:00.275 Zero copy mechanism will not be used.
00:29:00.275 EAL: No free 2048 kB hugepages reported on node 1
00:29:00.534 [2024-07-23 14:10:51.299920] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:00.534 [2024-07-23 14:10:51.377897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:01.100 14:10:52 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:01.100 14:10:52 -- common/autotest_common.sh@852 -- # return 0
00:29:01.100 14:10:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:01.100 14:10:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:01.358 14:10:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:01.358 14:10:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:01.358 14:10:52 -- common/autotest_common.sh@10 -- # set +x
00:29:01.358 14:10:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:01.359 14:10:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:01.359 14:10:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:01.618 nvme0n1
00:29:01.618 14:10:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:01.618 14:10:52 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:01.618 14:10:52 -- common/autotest_common.sh@10 -- # set +x
00:29:01.618 14:10:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:01.618 14:10:52 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:01.618 14:10:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:01.618 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:01.618 Zero copy mechanism will not be used.
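Once the timed run below finishes, the transient-error count is read back over the same socket; this is exactly what get_transient_errcount did after the previous run (the 210 compared against zero above). A one-line equivalent of the digest.sh@27/@28 pair, using the same RPC and jq path the script traces:

  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'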
00:29:01.618 Running I/O for 2 seconds...
00:29:01.618 [2024-07-23 14:10:52.625418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90
00:29:01.618 [2024-07-23 14:10:52.625814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:01.618 [2024-07-23 14:10:52.625840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[19 further entries, 2024-07-23 14:10:52.641 through 14:10:52.950, omitted: the same three-line pattern on tqpair=(0x14a2440), all against pdu=0x2000190fef90, each a WRITE with qid:1, cid:15, len:32, SGL TRANSPORT DATA BLOCK and a varying lba, completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22)]
00:29:02.139 [2024-07-23 14:10:52.966117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90
00:29:02.139 [2024-07-23 14:10:52.966684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:52.966702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:52.983357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:52.983772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:52.983791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:52.999179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:52.999733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:52.999752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.015164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.015707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.015725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.030766] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.031076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.031094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.047468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.047939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.047961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.064515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.064926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.064945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.078810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.079496] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.079514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.096705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.097114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.097133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.111856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.112160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.112178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.128907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.129392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.129411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.139 [2024-07-23 14:10:53.145309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.139 [2024-07-23 14:10:53.145758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.139 [2024-07-23 14:10:53.145777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.399 [2024-07-23 14:10:53.162583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.399 [2024-07-23 14:10:53.162996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.399 [2024-07-23 14:10:53.163016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.399 [2024-07-23 14:10:53.179011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.179383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.179401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.196343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.196738] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.196756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.212739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.213158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.213177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.230511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.230941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.230960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.246571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.247195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.247213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.264511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.264981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.264999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.282430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.282955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.282974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.300010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.300431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.300449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.316286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 
00:29:02.400 [2024-07-23 14:10:53.316792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.316811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.332564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.333137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.333155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.350181] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.350619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.350638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.367976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.368255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.368274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.385297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.385930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.385949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.400 [2024-07-23 14:10:53.401518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.400 [2024-07-23 14:10:53.401953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.400 [2024-07-23 14:10:53.401972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.416606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.417156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.417176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.431591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.432217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.432236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.447142] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.447505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.447524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.463841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.464318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.464336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.481801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.482217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.482240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.498468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.498785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.498803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.515157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.515656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.515673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.532097] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.532660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.532678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.550289] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.550957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.550975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.566736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.567237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.567256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.583847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.584198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.584216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.602088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.602612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.602631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.619497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.619992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.620011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.637723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.638135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.638153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.659 [2024-07-23 14:10:53.654462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.655080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.655099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
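Every entry in this run repeats the same three-line pattern: tcp.c reports a data digest (CRC32C) mismatch on the incoming PDU, the offending WRITE command is printed, and the command completes with status (00/22), i.e. status code type 0x0, status code 0x22, COMMAND TRANSIENT TRANSPORT ERROR. When eyeballing a saved capture of such a run, two fixed-string greps are enough to confirm that every digest error produced a matching transient-error completion (bperf.log below is a hypothetical file holding this output; the harness itself does not save one):

# Count digest errors and transient-error completions in a saved capture;
# the two counts should match one-for-one for this test to be healthy.
grep -Fc 'Data digest error on tqpair' bperf.log
grep -Fc 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log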
00:29:02.659 [2024-07-23 14:10:53.670714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.659 [2024-07-23 14:10:53.671241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.659 [2024-07-23 14:10:53.671260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.686842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.687397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.687415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.702418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.702768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.702786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.719836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.720144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.720163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.736665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.736994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.737012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.753792] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.754380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.754399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.771013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.771380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.771402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.788334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.788773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.788792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.805823] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.806328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.806346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.824908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.825504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.825522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.842402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.842950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.842968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.859906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.860400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.860419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.877585] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.878020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.878038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.892632] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.892988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.893006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.910732] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.911149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.911167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:02.919 [2024-07-23 14:10:53.926573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:02.919 [2024-07-23 14:10:53.927066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.919 [2024-07-23 14:10:53.927085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.179 [2024-07-23 14:10:53.944439] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.179 [2024-07-23 14:10:53.944855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.179 [2024-07-23 14:10:53.944876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.179 [2024-07-23 14:10:53.962226] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.179 [2024-07-23 14:10:53.962978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.179 [2024-07-23 14:10:53.962996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.179 [2024-07-23 14:10:53.979855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.179 [2024-07-23 14:10:53.980256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.179 [2024-07-23 14:10:53.980275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.179 [2024-07-23 14:10:53.996426] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.179 [2024-07-23 14:10:53.996766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.179 [2024-07-23 14:10:53.996785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.014091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.014633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.014652] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.031730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.032092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.032111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.049317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.049779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.049797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.067787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.068349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.068367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.085299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.085542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.085561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.101857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.102534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.102554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.119761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.120575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.120596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.137974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.138275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 
[2024-07-23 14:10:54.138294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.155096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.155466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.155485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.172676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.173055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.173074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.180 [2024-07-23 14:10:54.190650] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.180 [2024-07-23 14:10:54.191091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.180 [2024-07-23 14:10:54.191110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.208573] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.209057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.209075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.226576] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.227028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.227055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.245014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.245570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.245590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.261850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.262437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.262456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.279700] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.280262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.280281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.298870] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.299328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.299347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.316778] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.317329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.317347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.333742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.334270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.334288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.354131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.354487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.354506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.372640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.373303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.373322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.391698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.392236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.392255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.409695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.410329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.410348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.427228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.427735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.427753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.440 [2024-07-23 14:10:54.444530] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.440 [2024-07-23 14:10:54.444973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.440 [2024-07-23 14:10:54.444991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.461881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.462388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.462406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.479606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.479953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.479971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.496887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.497321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.497339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.513599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.514088] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.514107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.528971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.529660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.529679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.544345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.544974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.544993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.560582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.560857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.560875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.576272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.576589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.576607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.700 [2024-07-23 14:10:54.592586] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x14a2440) with pdu=0x2000190fef90 00:29:03.700 [2024-07-23 14:10:54.593006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:03.700 [2024-07-23 14:10:54.593024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.700 00:29:03.700 Latency(us) 00:29:03.700 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.700 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:03.700 nvme0n1 : 2.01 1799.17 224.90 0.00 0.00 8872.33 5413.84 26556.33 00:29:03.700 =================================================================================================================== 00:29:03.700 Total : 1799.17 224.90 0.00 0.00 8872.33 5413.84 26556.33 00:29:03.700 0 00:29:03.700 14:10:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:03.700 14:10:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 
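The job summary above pins down the workload that produced this run: bperf drove nvme0n1 with a randwrite workload at queue depth 16 and 128 KiB (131072-byte) I/Os on core mask 0x2 for roughly 2 seconds, with every write surfacing a transient transport error. As a rough sketch only, an equivalent standalone bdevperf invocation would look something like the following; the binary path and the -r socket flag are assumptions, since the harness's actual command line is assembled inside host/digest.sh and is not reproduced in this log:

# Sketch of a bdevperf invocation matching the job summary above.
# -m core mask, -q queue depth, -o I/O size in bytes, -w workload,
# -t runtime in seconds; -z starts bdevperf idle so the test can attach
# the NVMe-oF bdev over RPC, and -r is assumed to point at the socket
# that the bperf_rpc calls below talk to.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
"$SPDK_DIR"/build/examples/bdevperf -m 0x2 -q 16 -o 131072 -w randwrite \
    -t 2 -z -r /var/tmp/bperf.sock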
00:29:03.700 14:10:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:03.700 | .driver_specific 00:29:03.700 | .nvme_error 00:29:03.700 | .status_code 00:29:03.700 | .command_transient_transport_error' 00:29:03.700 14:10:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:03.960 14:10:54 -- host/digest.sh@71 -- # (( 116 > 0 )) 00:29:03.960 14:10:54 -- host/digest.sh@73 -- # killprocess 3425425 00:29:03.960 14:10:54 -- common/autotest_common.sh@926 -- # '[' -z 3425425 ']' 00:29:03.960 14:10:54 -- common/autotest_common.sh@930 -- # kill -0 3425425 00:29:03.960 14:10:54 -- common/autotest_common.sh@931 -- # uname 00:29:03.960 14:10:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:03.960 14:10:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3425425 00:29:03.960 14:10:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:03.960 14:10:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:03.960 14:10:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3425425' 00:29:03.960 killing process with pid 3425425 00:29:03.960 14:10:54 -- common/autotest_common.sh@945 -- # kill 3425425 00:29:03.960 Received shutdown signal, test time was about 2.000000 seconds 00:29:03.960 00:29:03.960 Latency(us) 00:29:03.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.960 =================================================================================================================== 00:29:03.960 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:03.960 14:10:54 -- common/autotest_common.sh@950 -- # wait 3425425 00:29:04.220 14:10:55 -- host/digest.sh@115 -- # killprocess 3423298 00:29:04.220 14:10:55 -- common/autotest_common.sh@926 -- # '[' -z 3423298 ']' 00:29:04.220 14:10:55 -- common/autotest_common.sh@930 -- # kill -0 3423298 00:29:04.220 14:10:55 -- common/autotest_common.sh@931 -- # uname 00:29:04.220 14:10:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:04.220 14:10:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3423298 00:29:04.220 14:10:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:04.220 14:10:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:04.220 14:10:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3423298' 00:29:04.220 killing process with pid 3423298 00:29:04.220 14:10:55 -- common/autotest_common.sh@945 -- # kill 3423298 00:29:04.220 14:10:55 -- common/autotest_common.sh@950 -- # wait 3423298 00:29:04.480 00:29:04.480 real 0m16.928s 00:29:04.480 user 0m33.172s 00:29:04.480 sys 0m3.581s 00:29:04.480 14:10:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:04.480 14:10:55 -- common/autotest_common.sh@10 -- # set +x 00:29:04.480 ************************************ 00:29:04.480 END TEST nvmf_digest_error 00:29:04.480 ************************************ 00:29:04.480 14:10:55 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:04.480 14:10:55 -- host/digest.sh@139 -- # nvmftestfini 00:29:04.480 14:10:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:04.480 14:10:55 -- nvmf/common.sh@116 -- # sync 00:29:04.480 14:10:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:04.480 14:10:55 -- nvmf/common.sh@119 -- # set +e 00:29:04.480 14:10:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:04.480 14:10:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
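The host/digest.sh@71/@27/@28 trace lines above show how the test turns this flood of (00/22) completions into a pass/fail signal: it asks the bperf instance for per-bdev iostat over the RPC socket, pulls the command_transient_transport_error counter out of the NVMe error statistics with jq, and requires the count (116 in this run) to be non-zero. A minimal self-contained sketch of that helper, assuming the same SPDK checkout path, socket, and jq filter seen in the trace:

#!/usr/bin/env bash
# Standalone sketch of the get_transient_errcount helper traced above.
# Assumes a bperf instance is already serving RPC on /var/tmp/bperf.sock.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}

get_transient_errcount() {
    local bdev=$1
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
echo "transient transport errors: $errcount"
# Each injected data-digest (CRC32C) failure completes as a
# COMMAND TRANSIENT TRANSPORT ERROR (00/22), so the test demands > 0.
(( errcount > 0 ))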
00:29:04.480 rmmod nvme_tcp 00:29:04.480 rmmod nvme_fabrics 00:29:04.480 rmmod nvme_keyring 00:29:04.480 14:10:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:04.480 14:10:55 -- nvmf/common.sh@123 -- # set -e 00:29:04.480 14:10:55 -- nvmf/common.sh@124 -- # return 0 00:29:04.480 14:10:55 -- nvmf/common.sh@477 -- # '[' -n 3423298 ']' 00:29:04.480 14:10:55 -- nvmf/common.sh@478 -- # killprocess 3423298 00:29:04.480 14:10:55 -- common/autotest_common.sh@926 -- # '[' -z 3423298 ']' 00:29:04.480 14:10:55 -- common/autotest_common.sh@930 -- # kill -0 3423298 00:29:04.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3423298) - No such process 00:29:04.480 14:10:55 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3423298 is not found' 00:29:04.480 Process with pid 3423298 is not found 00:29:04.480 14:10:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:04.480 14:10:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:04.480 14:10:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:04.480 14:10:55 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:04.480 14:10:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:04.480 14:10:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.480 14:10:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:04.480 14:10:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.019 14:10:57 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:07.019 00:29:07.019 real 0m41.760s 00:29:07.019 user 1m8.402s 00:29:07.019 sys 0m11.332s 00:29:07.019 14:10:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:07.019 14:10:57 -- common/autotest_common.sh@10 -- # set +x 00:29:07.019 ************************************ 00:29:07.019 END TEST nvmf_digest 00:29:07.019 ************************************ 00:29:07.019 14:10:57 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:07.019 14:10:57 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:07.019 14:10:57 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:29:07.019 14:10:57 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:07.019 14:10:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:07.019 14:10:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:07.019 14:10:57 -- common/autotest_common.sh@10 -- # set +x 00:29:07.019 ************************************ 00:29:07.019 START TEST nvmf_bdevperf 00:29:07.019 ************************************ 00:29:07.019 14:10:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:07.019 * Looking for test storage... 
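[Editor's aside] nvmftestfini, whose trace ends here (its final address flush appears just below), tears the host side down in a fixed order: retry-unload the NVMe/TCP initiator modules, kill the target if it is still alive, then remove the test namespace. A condensed sketch of that order; the explicit `ip netns del` stands in for the _remove_spdk_ns helper, whose body is not shown in this excerpt:

    #!/usr/bin/env bash
    # Condensed sketch of the teardown order traced above (not the full common.sh).
    set +e    # unloads may fail while qpair references drain; keep going

    for i in {1..20}; do                  # retry loop, as in nvmf/common.sh
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics

    pid=3423298                           # target pid from this run; normally a parameter
    kill -0 "$pid" 2>/dev/null && kill "$pid"   # probe for existence, then terminate

    ip netns del cvl_0_0_ns_spdk 2>/dev/null    # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1 2>/dev/null        # matches the flush just below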
00:29:07.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:07.019 14:10:57 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:07.019 14:10:57 -- nvmf/common.sh@7 -- # uname -s 00:29:07.019 14:10:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:07.019 14:10:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:07.019 14:10:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:07.019 14:10:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:07.019 14:10:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:07.019 14:10:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:07.020 14:10:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:07.020 14:10:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:07.020 14:10:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:07.020 14:10:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:07.020 14:10:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:07.020 14:10:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:07.020 14:10:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:07.020 14:10:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:07.020 14:10:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:07.020 14:10:57 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.020 14:10:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:07.020 14:10:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.020 14:10:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.020 14:10:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.020 14:10:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.020 14:10:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.020 14:10:57 -- paths/export.sh@5 -- # export PATH 00:29:07.020 14:10:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.020 14:10:57 -- nvmf/common.sh@46 -- # : 0 00:29:07.020 14:10:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:07.020 14:10:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:07.020 14:10:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:07.020 14:10:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.020 14:10:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.020 14:10:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:07.020 14:10:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:07.020 14:10:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:07.020 14:10:57 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:07.020 14:10:57 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:07.020 14:10:57 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:07.020 14:10:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:07.020 14:10:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.020 14:10:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:07.020 14:10:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:07.020 14:10:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:07.020 14:10:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.020 14:10:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:07.020 14:10:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.020 14:10:57 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:07.020 14:10:57 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:07.020 14:10:57 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:07.020 14:10:57 -- common/autotest_common.sh@10 -- # set +x 00:29:12.294 14:11:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:12.294 14:11:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:12.294 14:11:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:12.294 14:11:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:12.294 14:11:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:12.294 14:11:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:12.294 14:11:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:12.294 14:11:02 -- nvmf/common.sh@294 -- # net_devs=() 00:29:12.294 14:11:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:12.294 14:11:02 -- nvmf/common.sh@295 
-- # e810=() 00:29:12.294 14:11:02 -- nvmf/common.sh@295 -- # local -ga e810 00:29:12.294 14:11:02 -- nvmf/common.sh@296 -- # x722=() 00:29:12.294 14:11:02 -- nvmf/common.sh@296 -- # local -ga x722 00:29:12.294 14:11:02 -- nvmf/common.sh@297 -- # mlx=() 00:29:12.294 14:11:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:12.294 14:11:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:12.294 14:11:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:12.294 14:11:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:12.294 14:11:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:12.294 14:11:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:12.294 14:11:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:12.294 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:12.294 14:11:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:12.294 14:11:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:12.294 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:12.294 14:11:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:12.294 14:11:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:12.294 14:11:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.294 14:11:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:12.294 14:11:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.294 14:11:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:12.294 Found 
net devices under 0000:86:00.0: cvl_0_0 00:29:12.294 14:11:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.294 14:11:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:12.294 14:11:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:12.294 14:11:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:12.294 14:11:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:12.294 14:11:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:12.294 Found net devices under 0000:86:00.1: cvl_0_1 00:29:12.294 14:11:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:12.294 14:11:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:12.294 14:11:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:12.294 14:11:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:12.294 14:11:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:12.294 14:11:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:12.294 14:11:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:12.294 14:11:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:12.294 14:11:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:12.294 14:11:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:12.294 14:11:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:12.294 14:11:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:12.294 14:11:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:12.294 14:11:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:12.294 14:11:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:12.294 14:11:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:12.294 14:11:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:12.294 14:11:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:12.294 14:11:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:12.294 14:11:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:12.294 14:11:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:12.294 14:11:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:12.294 14:11:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:12.295 14:11:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:12.295 14:11:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:12.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:12.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:29:12.295 00:29:12.295 --- 10.0.0.2 ping statistics --- 00:29:12.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.295 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:29:12.295 14:11:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:12.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:12.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:29:12.295 00:29:12.295 --- 10.0.0.1 ping statistics --- 00:29:12.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:12.295 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:29:12.295 14:11:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:12.295 14:11:02 -- nvmf/common.sh@410 -- # return 0 00:29:12.295 14:11:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:12.295 14:11:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:12.295 14:11:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:12.295 14:11:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:12.295 14:11:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:12.295 14:11:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:12.295 14:11:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:12.295 14:11:02 -- host/bdevperf.sh@25 -- # tgt_init 00:29:12.295 14:11:02 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:12.295 14:11:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:12.295 14:11:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:12.295 14:11:02 -- common/autotest_common.sh@10 -- # set +x 00:29:12.295 14:11:02 -- nvmf/common.sh@469 -- # nvmfpid=3429591 00:29:12.295 14:11:02 -- nvmf/common.sh@470 -- # waitforlisten 3429591 00:29:12.295 14:11:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:12.295 14:11:02 -- common/autotest_common.sh@819 -- # '[' -z 3429591 ']' 00:29:12.295 14:11:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.295 14:11:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:12.295 14:11:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.295 14:11:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:12.295 14:11:02 -- common/autotest_common.sh@10 -- # set +x 00:29:12.295 [2024-07-23 14:11:02.935999] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:12.295 [2024-07-23 14:11:02.936040] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.295 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.295 [2024-07-23 14:11:02.992391] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:12.295 [2024-07-23 14:11:03.071911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:12.295 [2024-07-23 14:11:03.072021] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:12.295 [2024-07-23 14:11:03.072029] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.295 [2024-07-23 14:11:03.072035] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
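[Editor's aside] The network plumbing and target start that produced the output above reduce to a short sequence: move one port of the NIC pair into a private namespace, number both sides, open TCP/4420, verify reachability, then launch nvmf_tgt inside the namespace and poll until it answers RPC. A sketch with the interface names and arguments from this run; the poll loop is our simplification of waitforlisten:

    #!/usr/bin/env bash
    # Sketch: namespace wiring plus target start, per nvmf_tcp_init/nvmfappstart.
    set -euo pipefail
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"             # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the host
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                          # the reachability check above

    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    echo "nvmf_tgt pid $!"
    until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1                               # simplified waitforlisten
    done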
00:29:12.295 [2024-07-23 14:11:03.072147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:12.295 [2024-07-23 14:11:03.072173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.295 [2024-07-23 14:11:03.072174] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.863 14:11:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:12.863 14:11:03 -- common/autotest_common.sh@852 -- # return 0 00:29:12.863 14:11:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:12.863 14:11:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:12.863 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:29:12.863 14:11:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.863 14:11:03 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.863 14:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:12.863 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:29:12.863 [2024-07-23 14:11:03.785093] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.863 14:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:12.863 14:11:03 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.863 14:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:12.863 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:29:12.863 Malloc0 00:29:12.863 14:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:12.863 14:11:03 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:12.863 14:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:12.863 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:29:12.863 14:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:12.863 14:11:03 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:12.863 14:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:12.863 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:29:12.863 14:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:12.863 14:11:03 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.863 14:11:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:12.863 14:11:03 -- common/autotest_common.sh@10 -- # set +x 00:29:12.863 [2024-07-23 14:11:03.839788] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.863 14:11:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:12.863 14:11:03 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:12.863 14:11:03 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:12.863 14:11:03 -- nvmf/common.sh@520 -- # config=() 00:29:12.863 14:11:03 -- nvmf/common.sh@520 -- # local subsystem config 00:29:12.863 14:11:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:12.863 14:11:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:12.863 { 00:29:12.863 "params": { 00:29:12.863 "name": "Nvme$subsystem", 00:29:12.863 "trtype": "$TEST_TRANSPORT", 00:29:12.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:12.863 "adrfam": "ipv4", 00:29:12.863 "trsvcid": "$NVMF_PORT", 00:29:12.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:12.863 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:12.863 "hdgst": ${hdgst:-false}, 00:29:12.863 "ddgst": ${ddgst:-false} 00:29:12.863 }, 00:29:12.863 "method": "bdev_nvme_attach_controller" 00:29:12.863 } 00:29:12.863 EOF 00:29:12.863 )") 00:29:12.863 14:11:03 -- nvmf/common.sh@542 -- # cat 00:29:12.863 14:11:03 -- nvmf/common.sh@544 -- # jq . 00:29:12.863 14:11:03 -- nvmf/common.sh@545 -- # IFS=, 00:29:12.863 14:11:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:12.863 "params": { 00:29:12.863 "name": "Nvme1", 00:29:12.863 "trtype": "tcp", 00:29:12.863 "traddr": "10.0.0.2", 00:29:12.863 "adrfam": "ipv4", 00:29:12.863 "trsvcid": "4420", 00:29:12.863 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.863 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:12.863 "hdgst": false, 00:29:12.863 "ddgst": false 00:29:12.863 }, 00:29:12.863 "method": "bdev_nvme_attach_controller" 00:29:12.863 }' 00:29:13.123 [2024-07-23 14:11:03.883987] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:13.123 [2024-07-23 14:11:03.884029] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429718 ] 00:29:13.123 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.123 [2024-07-23 14:11:03.938173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.123 [2024-07-23 14:11:04.009566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.382 Running I/O for 1 seconds... 00:29:14.320 00:29:14.320 Latency(us) 00:29:14.320 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.320 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:14.320 Verification LBA range: start 0x0 length 0x4000 00:29:14.320 Nvme1n1 : 1.00 16287.77 63.62 0.00 0.00 7829.53 890.43 25530.55 00:29:14.320 =================================================================================================================== 00:29:14.320 Total : 16287.77 63.62 0.00 0.00 7829.53 890.43 25530.55 00:29:14.580 14:11:05 -- host/bdevperf.sh@30 -- # bdevperfpid=3429954 00:29:14.580 14:11:05 -- host/bdevperf.sh@32 -- # sleep 3 00:29:14.580 14:11:05 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:14.580 14:11:05 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:14.580 14:11:05 -- nvmf/common.sh@520 -- # config=() 00:29:14.580 14:11:05 -- nvmf/common.sh@520 -- # local subsystem config 00:29:14.580 14:11:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:29:14.580 14:11:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:29:14.580 { 00:29:14.580 "params": { 00:29:14.580 "name": "Nvme$subsystem", 00:29:14.580 "trtype": "$TEST_TRANSPORT", 00:29:14.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:14.580 "adrfam": "ipv4", 00:29:14.580 "trsvcid": "$NVMF_PORT", 00:29:14.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:14.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:14.580 "hdgst": ${hdgst:-false}, 00:29:14.580 "ddgst": ${ddgst:-false} 00:29:14.580 }, 00:29:14.580 "method": "bdev_nvme_attach_controller" 00:29:14.580 } 00:29:14.580 EOF 00:29:14.580 )") 00:29:14.580 14:11:05 -- nvmf/common.sh@542 -- # cat 00:29:14.580 14:11:05 -- nvmf/common.sh@544 -- # jq . 
00:29:14.580 14:11:05 -- nvmf/common.sh@545 -- # IFS=, 00:29:14.580 14:11:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:29:14.580 "params": { 00:29:14.580 "name": "Nvme1", 00:29:14.580 "trtype": "tcp", 00:29:14.580 "traddr": "10.0.0.2", 00:29:14.580 "adrfam": "ipv4", 00:29:14.580 "trsvcid": "4420", 00:29:14.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:14.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:14.580 "hdgst": false, 00:29:14.580 "ddgst": false 00:29:14.580 }, 00:29:14.580 "method": "bdev_nvme_attach_controller" 00:29:14.580 }' 00:29:14.580 [2024-07-23 14:11:05.540589] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:14.580 [2024-07-23 14:11:05.540637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3429954 ] 00:29:14.580 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.580 [2024-07-23 14:11:05.596927] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.839 [2024-07-23 14:11:05.665062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.098 Running I/O for 15 seconds... 00:29:17.639 14:11:08 -- host/bdevperf.sh@33 -- # kill -9 3429591 00:29:17.639 14:11:08 -- host/bdevperf.sh@35 -- # sleep 3 00:29:17.639 [2024-07-23 14:11:08.512812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.639 [2024-07-23 14:11:08.512846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.639 [2024-07-23 14:11:08.512863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.639 [2024-07-23 14:11:08.512871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.639 [2024-07-23 14:11:08.512880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:76728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.639 [2024-07-23 14:11:08.512889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.639 [2024-07-23 14:11:08.512898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.639 [2024-07-23 14:11:08.512905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.639 [2024-07-23 14:11:08.512914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.639 [2024-07-23 14:11:08.512920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.639 [2024-07-23 14:11:08.512928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.512936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.512944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.512951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.512961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.512968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.512977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.512983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.512998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:76792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:46 nsid:1 lba:76816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:76904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:76912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:76944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.640 [2024-07-23 14:11:08.513326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:76976 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.640 [2024-07-23 14:11:08.513340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:76984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:76992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.640 [2024-07-23 14:11:08.513371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.640 [2024-07-23 14:11:08.513387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:77016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.640 [2024-07-23 14:11:08.513432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:17.640 [2024-07-23 14:11:08.513492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:76384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:76400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:76408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.640 [2024-07-23 14:11:08.513567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:77040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.640 [2024-07-23 14:11:08.513582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.640 [2024-07-23 14:11:08.513606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:76432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.640 [2024-07-23 14:11:08.513613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:76456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513641] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:76496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:76512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:77056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513794] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.513976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.513984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.513993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:77224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.514062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.514093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:17.641 [2024-07-23 14:11:08.514116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:77256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.641 [2024-07-23 14:11:08.514122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:76544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:76592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.641 [2024-07-23 14:11:08.514199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.641 [2024-07-23 14:11:08.514207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:76664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514269] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:76768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514427] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:59 nsid:1 lba:76784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:77320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:77336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77368 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:17.642 [2024-07-23 14:11:08.514738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:17.642 [2024-07-23 14:11:08.514769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:76848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.642 [2024-07-23 14:11:08.514798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.642 [2024-07-23 14:11:08.514806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.643 [2024-07-23 14:11:08.514815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.643 [2024-07-23 14:11:08.514824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.643 [2024-07-23 14:11:08.514831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.643 [2024-07-23 14:11:08.514839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.643 [2024-07-23 14:11:08.514845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.643 [2024-07-23 14:11:08.514853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:76920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.643 [2024-07-23 14:11:08.514860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.643 [2024-07-23 14:11:08.514868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.643 [2024-07-23 14:11:08.514875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.643 [2024-07-23 14:11:08.514883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafcb80 is same with the state(5) to be set 00:29:17.643 [2024-07-23 14:11:08.514891] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:17.643 [2024-07-23 14:11:08.514896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:17.643 [2024-07-23 14:11:08.514903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76936 len:8 PRP1 0x0 PRP2 0x0 00:29:17.643 [2024-07-23 14:11:08.514909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:17.643 [2024-07-23 14:11:08.514950] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xafcb80 was disconnected and freed. reset controller. 00:29:17.643 [2024-07-23 14:11:08.516776] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.643 [2024-07-23 14:11:08.516825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.643 [2024-07-23 14:11:08.518127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.518472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.518506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.643 [2024-07-23 14:11:08.518531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.643 [2024-07-23 14:11:08.519029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.643 [2024-07-23 14:11:08.519267] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.643 [2024-07-23 14:11:08.519278] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.643 [2024-07-23 14:11:08.519287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.643 [2024-07-23 14:11:08.521145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.643 [2024-07-23 14:11:08.529019] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.643 [2024-07-23 14:11:08.529498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.529917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.529957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.643 [2024-07-23 14:11:08.529981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.643 [2024-07-23 14:11:08.530479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.643 [2024-07-23 14:11:08.530816] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.643 [2024-07-23 14:11:08.530842] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.643 [2024-07-23 14:11:08.530863] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.643 [2024-07-23 14:11:08.532957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.643 [2024-07-23 14:11:08.540999] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.643 [2024-07-23 14:11:08.541484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.541859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.541891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.643 [2024-07-23 14:11:08.541915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.643 [2024-07-23 14:11:08.542281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.643 [2024-07-23 14:11:08.542412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.643 [2024-07-23 14:11:08.542421] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.643 [2024-07-23 14:11:08.542429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.643 [2024-07-23 14:11:08.544204] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.643 [2024-07-23 14:11:08.552930] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.643 [2024-07-23 14:11:08.553397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.553805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.553837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.643 [2024-07-23 14:11:08.553859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.643 [2024-07-23 14:11:08.554156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.643 [2024-07-23 14:11:08.554366] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.643 [2024-07-23 14:11:08.554376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.643 [2024-07-23 14:11:08.554382] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.643 [2024-07-23 14:11:08.556064] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.643 [2024-07-23 14:11:08.564915] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.643 [2024-07-23 14:11:08.565608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.566016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.566060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.643 [2024-07-23 14:11:08.566091] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.643 [2024-07-23 14:11:08.566475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.643 [2024-07-23 14:11:08.566571] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.643 [2024-07-23 14:11:08.566580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.643 [2024-07-23 14:11:08.566586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.643 [2024-07-23 14:11:08.568230] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.643 [2024-07-23 14:11:08.576979] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.643 [2024-07-23 14:11:08.577468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.577820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.577852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.643 [2024-07-23 14:11:08.577874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.643 [2024-07-23 14:11:08.578090] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.643 [2024-07-23 14:11:08.578186] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.643 [2024-07-23 14:11:08.578195] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.643 [2024-07-23 14:11:08.578203] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.643 [2024-07-23 14:11:08.580071] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.643 [2024-07-23 14:11:08.588983] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.643 [2024-07-23 14:11:08.589466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.589822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.589854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.643 [2024-07-23 14:11:08.589875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.643 [2024-07-23 14:11:08.590029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.643 [2024-07-23 14:11:08.590123] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.643 [2024-07-23 14:11:08.590133] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.643 [2024-07-23 14:11:08.590140] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.643 [2024-07-23 14:11:08.591896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.643 [2024-07-23 14:11:08.600969] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.643 [2024-07-23 14:11:08.601422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.601876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.643 [2024-07-23 14:11:08.601907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.644 [2024-07-23 14:11:08.601930] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.644 [2024-07-23 14:11:08.602332] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.644 [2024-07-23 14:11:08.602501] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.644 [2024-07-23 14:11:08.602511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.644 [2024-07-23 14:11:08.602517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.644 [2024-07-23 14:11:08.604337] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.644 [2024-07-23 14:11:08.612837] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.644 [2024-07-23 14:11:08.613345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.644 [2024-07-23 14:11:08.613714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.644 [2024-07-23 14:11:08.613746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.644 [2024-07-23 14:11:08.613768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.644 [2024-07-23 14:11:08.614116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.644 [2024-07-23 14:11:08.614317] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.644 [2024-07-23 14:11:08.614326] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.644 [2024-07-23 14:11:08.614332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.644 [2024-07-23 14:11:08.616011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.644 [2024-07-23 14:11:08.624692] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.644 [2024-07-23 14:11:08.625241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.644 [2024-07-23 14:11:08.625399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.644 [2024-07-23 14:11:08.625431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.644 [2024-07-23 14:11:08.625453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.644 [2024-07-23 14:11:08.625938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.644 [2024-07-23 14:11:08.626135] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.644 [2024-07-23 14:11:08.626148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.644 [2024-07-23 14:11:08.626158] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.644 [2024-07-23 14:11:08.628767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.644 [2024-07-23 14:11:08.637041] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.644 [2024-07-23 14:11:08.637591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.644 [2024-07-23 14:11:08.638075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.644 [2024-07-23 14:11:08.638109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.644 [2024-07-23 14:11:08.638138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.644 [2024-07-23 14:11:08.638205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.644 [2024-07-23 14:11:08.638320] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.644 [2024-07-23 14:11:08.638329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.644 [2024-07-23 14:11:08.638335] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.644 [2024-07-23 14:11:08.640159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.644 [2024-07-23 14:11:08.649011] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.644 [2024-07-23 14:11:08.649556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.644 [2024-07-23 14:11:08.649920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.644 [2024-07-23 14:11:08.649951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.644 [2024-07-23 14:11:08.649973] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.644 [2024-07-23 14:11:08.650219] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.644 [2024-07-23 14:11:08.650430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.644 [2024-07-23 14:11:08.650439] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.644 [2024-07-23 14:11:08.650446] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.644 [2024-07-23 14:11:08.652335] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.905 [2024-07-23 14:11:08.661021] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.905 [2024-07-23 14:11:08.661498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.661854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.661885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.905 [2024-07-23 14:11:08.661907] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.905 [2024-07-23 14:11:08.662104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.905 [2024-07-23 14:11:08.662487] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.905 [2024-07-23 14:11:08.662513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.905 [2024-07-23 14:11:08.662538] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.905 [2024-07-23 14:11:08.664190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.905 [2024-07-23 14:11:08.672971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.905 [2024-07-23 14:11:08.673403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.673980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.674011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.905 [2024-07-23 14:11:08.674033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.905 [2024-07-23 14:11:08.674339] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.905 [2024-07-23 14:11:08.674434] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.905 [2024-07-23 14:11:08.674444] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.905 [2024-07-23 14:11:08.674453] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.905 [2024-07-23 14:11:08.676112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.905 [2024-07-23 14:11:08.684908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.905 [2024-07-23 14:11:08.685468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.685872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.685905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.905 [2024-07-23 14:11:08.685927] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.905 [2024-07-23 14:11:08.686371] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.905 [2024-07-23 14:11:08.686680] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.905 [2024-07-23 14:11:08.686693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.905 [2024-07-23 14:11:08.686703] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.905 [2024-07-23 14:11:08.689379] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.905 [2024-07-23 14:11:08.697402] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.905 [2024-07-23 14:11:08.697868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.698099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.698132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.905 [2024-07-23 14:11:08.698155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.905 [2024-07-23 14:11:08.698368] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.905 [2024-07-23 14:11:08.698454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.905 [2024-07-23 14:11:08.698463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.905 [2024-07-23 14:11:08.698470] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.905 [2024-07-23 14:11:08.700211] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.905 [2024-07-23 14:11:08.709229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.905 [2024-07-23 14:11:08.709756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.710170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.905 [2024-07-23 14:11:08.710204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.905 [2024-07-23 14:11:08.710226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.905 [2024-07-23 14:11:08.710427] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.710528] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.710538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.710547] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.712260] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.906 [2024-07-23 14:11:08.721107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.721432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.721797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.721828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.721850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.722244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.722399] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.722408] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.722415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.724193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.906 [2024-07-23 14:11:08.733061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.733626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.734065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.734098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.734121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.734352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.734550] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.734559] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.734566] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.736328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.906 [2024-07-23 14:11:08.744796] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.745792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.746250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.746288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.746313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.746658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.746835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.746845] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.746853] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.748472] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.906 [2024-07-23 14:11:08.756608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.757110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.757499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.757510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.757518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.757640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.757763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.757772] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.757778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.759520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.906 [2024-07-23 14:11:08.768381] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.768830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.769293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.769326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.769349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.769631] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.769923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.769932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.769939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.771805] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.906 [2024-07-23 14:11:08.780413] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.780839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.781283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.781317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.781339] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.781722] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.781869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.781879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.781886] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.783737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.906 [2024-07-23 14:11:08.792361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.792805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.793231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.793264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.793287] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.793667] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.793914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.793923] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.793929] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.795723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.906 [2024-07-23 14:11:08.804342] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.804984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.805420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.805453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.805475] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.805650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.805768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.805777] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.805785] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.906 [2024-07-23 14:11:08.807517] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.906 [2024-07-23 14:11:08.816314] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.906 [2024-07-23 14:11:08.816889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.817360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.906 [2024-07-23 14:11:08.817393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.906 [2024-07-23 14:11:08.817415] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.906 [2024-07-23 14:11:08.817796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.906 [2024-07-23 14:11:08.817929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.906 [2024-07-23 14:11:08.817941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.906 [2024-07-23 14:11:08.817951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.820520] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.907 [2024-07-23 14:11:08.828854] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.907 [2024-07-23 14:11:08.829440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.829860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.829893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.907 [2024-07-23 14:11:08.829916] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.907 [2024-07-23 14:11:08.830146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.907 [2024-07-23 14:11:08.830262] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.907 [2024-07-23 14:11:08.830271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.907 [2024-07-23 14:11:08.830278] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.832149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.907 [2024-07-23 14:11:08.840700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.907 [2024-07-23 14:11:08.841273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.841729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.841762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.907 [2024-07-23 14:11:08.841784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.907 [2024-07-23 14:11:08.841926] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.907 [2024-07-23 14:11:08.842021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.907 [2024-07-23 14:11:08.842031] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.907 [2024-07-23 14:11:08.842037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.843869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.907 [2024-07-23 14:11:08.852580] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.907 [2024-07-23 14:11:08.853167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.853650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.853682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.907 [2024-07-23 14:11:08.853704] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.907 [2024-07-23 14:11:08.853920] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.907 [2024-07-23 14:11:08.853988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.907 [2024-07-23 14:11:08.853997] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.907 [2024-07-23 14:11:08.854003] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.855739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.907 [2024-07-23 14:11:08.864446] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.907 [2024-07-23 14:11:08.865006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.865533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.865568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.907 [2024-07-23 14:11:08.865598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.907 [2024-07-23 14:11:08.865782] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.907 [2024-07-23 14:11:08.865905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.907 [2024-07-23 14:11:08.865914] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.907 [2024-07-23 14:11:08.865921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.867645] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.907 [2024-07-23 14:11:08.876424] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.907 [2024-07-23 14:11:08.877006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.877431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.877464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.907 [2024-07-23 14:11:08.877487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.907 [2024-07-23 14:11:08.877867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.907 [2024-07-23 14:11:08.878049] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.907 [2024-07-23 14:11:08.878059] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.907 [2024-07-23 14:11:08.878065] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.879838] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.907 [2024-07-23 14:11:08.888415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.907 [2024-07-23 14:11:08.888944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.889330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.889368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.907 [2024-07-23 14:11:08.889390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.907 [2024-07-23 14:11:08.889923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.907 [2024-07-23 14:11:08.890140] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.907 [2024-07-23 14:11:08.890151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.907 [2024-07-23 14:11:08.890157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.891852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:17.907 [2024-07-23 14:11:08.900143] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.907 [2024-07-23 14:11:08.900596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.901089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.901123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.907 [2024-07-23 14:11:08.901146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.907 [2024-07-23 14:11:08.901585] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.907 [2024-07-23 14:11:08.901895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.907 [2024-07-23 14:11:08.901904] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.907 [2024-07-23 14:11:08.901911] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.903652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:17.907 [2024-07-23 14:11:08.912198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:17.907 [2024-07-23 14:11:08.912636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.913072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:17.907 [2024-07-23 14:11:08.913105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:17.907 [2024-07-23 14:11:08.913129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:17.907 [2024-07-23 14:11:08.913511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:17.907 [2024-07-23 14:11:08.914052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:17.907 [2024-07-23 14:11:08.914078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:17.907 [2024-07-23 14:11:08.914111] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:17.907 [2024-07-23 14:11:08.915877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.169 [2024-07-23 14:11:08.924202] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:08.924801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.925239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.925273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.169 [2024-07-23 14:11:08.925297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.169 [2024-07-23 14:11:08.925679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.169 [2024-07-23 14:11:08.926173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.169 [2024-07-23 14:11:08.926205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.169 [2024-07-23 14:11:08.926213] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.169 [2024-07-23 14:11:08.927928] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.169 [2024-07-23 14:11:08.936071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:08.936579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.937069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.937102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.169 [2024-07-23 14:11:08.937124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.169 [2024-07-23 14:11:08.937248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.169 [2024-07-23 14:11:08.937346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.169 [2024-07-23 14:11:08.937355] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.169 [2024-07-23 14:11:08.937361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.169 [2024-07-23 14:11:08.939125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.169 [2024-07-23 14:11:08.947964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:08.948551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.948987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.949019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.169 [2024-07-23 14:11:08.949041] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.169 [2024-07-23 14:11:08.949410] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.169 [2024-07-23 14:11:08.949491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.169 [2024-07-23 14:11:08.949501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.169 [2024-07-23 14:11:08.949507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.169 [2024-07-23 14:11:08.952107] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.169 [2024-07-23 14:11:08.960335] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:08.960892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.961388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.961424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.169 [2024-07-23 14:11:08.961446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.169 [2024-07-23 14:11:08.961879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.169 [2024-07-23 14:11:08.962165] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.169 [2024-07-23 14:11:08.962175] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.169 [2024-07-23 14:11:08.962182] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.169 [2024-07-23 14:11:08.963874] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.169 [2024-07-23 14:11:08.972211] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:08.972807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.973263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.973309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.169 [2024-07-23 14:11:08.973317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.169 [2024-07-23 14:11:08.973411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.169 [2024-07-23 14:11:08.973520] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.169 [2024-07-23 14:11:08.973531] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.169 [2024-07-23 14:11:08.973537] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.169 [2024-07-23 14:11:08.975182] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.169 [2024-07-23 14:11:08.983958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:08.984535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.985056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.985089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.169 [2024-07-23 14:11:08.985111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.169 [2024-07-23 14:11:08.985541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.169 [2024-07-23 14:11:08.985924] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.169 [2024-07-23 14:11:08.985949] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.169 [2024-07-23 14:11:08.985970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.169 [2024-07-23 14:11:08.987935] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.169 [2024-07-23 14:11:08.995908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:08.996372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.996807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:08.996838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.169 [2024-07-23 14:11:08.996859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.169 [2024-07-23 14:11:08.997350] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.169 [2024-07-23 14:11:08.997466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.169 [2024-07-23 14:11:08.997476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.169 [2024-07-23 14:11:08.997482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.169 [2024-07-23 14:11:08.999207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.169 [2024-07-23 14:11:09.007747] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:09.008336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:09.008763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.169 [2024-07-23 14:11:09.008795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.169 [2024-07-23 14:11:09.008818] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.169 [2024-07-23 14:11:09.009214] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.169 [2024-07-23 14:11:09.009412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.169 [2024-07-23 14:11:09.009421] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.169 [2024-07-23 14:11:09.009431] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.169 [2024-07-23 14:11:09.011274] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.169 [2024-07-23 14:11:09.019693] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.169 [2024-07-23 14:11:09.020193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.020660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.020692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.020714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.020944] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.021277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.021287] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.021293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.023161] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.170 [2024-07-23 14:11:09.031600] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.032199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.032656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.032687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.032710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.033076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.033212] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.033221] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.033227] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.034858] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.170 [2024-07-23 14:11:09.043579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.044139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.044643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.044674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.044697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.045145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.045217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.045226] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.045233] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.047085] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.170 [2024-07-23 14:11:09.055540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.056107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.056605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.056637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.056659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.057152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.057537] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.057574] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.057581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.059286] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.170 [2024-07-23 14:11:09.067315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.067847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.068309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.068343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.068365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.068694] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.068804] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.068813] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.068819] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.070376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.170 [2024-07-23 14:11:09.079339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.079911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.080388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.080421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.080444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.080923] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.081144] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.081158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.081168] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.083753] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.170 [2024-07-23 14:11:09.091742] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.092297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.092800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.092831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.092853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.093249] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.093421] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.093430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.093436] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.095121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.170 [2024-07-23 14:11:09.103734] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.104315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.104821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.104853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.104876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.105320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.105606] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.105632] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.105653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.107377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.170 [2024-07-23 14:11:09.115657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.116262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.116702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.116733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.116755] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.170 [2024-07-23 14:11:09.117251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.170 [2024-07-23 14:11:09.117635] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.170 [2024-07-23 14:11:09.117672] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.170 [2024-07-23 14:11:09.117680] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.170 [2024-07-23 14:11:09.119311] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.170 [2024-07-23 14:11:09.127579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.170 [2024-07-23 14:11:09.128129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.128582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.170 [2024-07-23 14:11:09.128613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.170 [2024-07-23 14:11:09.128636] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.171 [2024-07-23 14:11:09.128858] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.171 [2024-07-23 14:11:09.128939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.171 [2024-07-23 14:11:09.128948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.171 [2024-07-23 14:11:09.128954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.171 [2024-07-23 14:11:09.130652] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.171 [2024-07-23 14:11:09.139527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.171 [2024-07-23 14:11:09.140101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.171 [2024-07-23 14:11:09.140542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.171 [2024-07-23 14:11:09.140573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.171 [2024-07-23 14:11:09.140595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.171 [2024-07-23 14:11:09.141026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.171 [2024-07-23 14:11:09.141323] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.171 [2024-07-23 14:11:09.141349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.171 [2024-07-23 14:11:09.141371] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.171 [2024-07-23 14:11:09.144131] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.171 [2024-07-23 14:11:09.152440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.171 [2024-07-23 14:11:09.152976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.171 [2024-07-23 14:11:09.153426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.171 [2024-07-23 14:11:09.153459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.171 [2024-07-23 14:11:09.153482] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.171 [2024-07-23 14:11:09.153850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.171 [2024-07-23 14:11:09.153996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.171 [2024-07-23 14:11:09.154005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.171 [2024-07-23 14:11:09.154012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.171 [2024-07-23 14:11:09.155767] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.171 [2024-07-23 14:11:09.164356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.171 [2024-07-23 14:11:09.164895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.171 [2024-07-23 14:11:09.165372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.171 [2024-07-23 14:11:09.165412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.171 [2024-07-23 14:11:09.165434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.171 [2024-07-23 14:11:09.165582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.171 [2024-07-23 14:11:09.165710] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.171 [2024-07-23 14:11:09.165719] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.171 [2024-07-23 14:11:09.165725] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.171 [2024-07-23 14:11:09.167465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.171 [2024-07-23 14:11:09.176180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.171 [2024-07-23 14:11:09.176736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.171 [2024-07-23 14:11:09.177226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.171 [2024-07-23 14:11:09.177260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.171 [2024-07-23 14:11:09.177283] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.171 [2024-07-23 14:11:09.177613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.171 [2024-07-23 14:11:09.177815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.171 [2024-07-23 14:11:09.177824] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.171 [2024-07-23 14:11:09.177830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.171 [2024-07-23 14:11:09.179657] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.432 [2024-07-23 14:11:09.188108] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.432 [2024-07-23 14:11:09.188683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.432 [2024-07-23 14:11:09.189176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.432 [2024-07-23 14:11:09.189209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.432 [2024-07-23 14:11:09.189233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.432 [2024-07-23 14:11:09.189713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.432 [2024-07-23 14:11:09.189879] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.432 [2024-07-23 14:11:09.189889] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.432 [2024-07-23 14:11:09.189895] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.432 [2024-07-23 14:11:09.191751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.432 [2024-07-23 14:11:09.199902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.432 [2024-07-23 14:11:09.200470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.432 [2024-07-23 14:11:09.200967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.432 [2024-07-23 14:11:09.200997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.432 [2024-07-23 14:11:09.201026] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.432 [2024-07-23 14:11:09.201370] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.432 [2024-07-23 14:11:09.201754] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.432 [2024-07-23 14:11:09.201780] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.432 [2024-07-23 14:11:09.201801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.432 [2024-07-23 14:11:09.203775] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.432 [2024-07-23 14:11:09.211779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.432 [2024-07-23 14:11:09.212367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.432 [2024-07-23 14:11:09.212787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.432 [2024-07-23 14:11:09.212818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.432 [2024-07-23 14:11:09.212841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.432 [2024-07-23 14:11:09.213283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.432 [2024-07-23 14:11:09.213600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.432 [2024-07-23 14:11:09.213612] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.432 [2024-07-23 14:11:09.213622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.432 [2024-07-23 14:11:09.216386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.432 [2024-07-23 14:11:09.224214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.432 [2024-07-23 14:11:09.224782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.432 [2024-07-23 14:11:09.225274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.432 [2024-07-23 14:11:09.225307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.432 [2024-07-23 14:11:09.225330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.432 [2024-07-23 14:11:09.225760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.225911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.225920] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.225926] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.227479] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.433 [2024-07-23 14:11:09.236194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.236782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.237269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.237301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.237323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.237712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.237975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.237984] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.237991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.239729] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.433 [2024-07-23 14:11:09.248038] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.248667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.249145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.249178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.249200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.249581] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.249878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.249888] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.249894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.251686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.433 [2024-07-23 14:11:09.259924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.260503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.261009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.261040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.261075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.261356] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.261689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.261715] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.261736] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.263610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.433 [2024-07-23 14:11:09.271812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.272349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.272759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.272770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.272778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.272892] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.273047] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.273057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.273064] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.274914] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.433 [2024-07-23 14:11:09.283978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.284564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.285016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.285057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.285082] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.285232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.285361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.285370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.285377] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.286952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.433 [2024-07-23 14:11:09.295937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.296538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.296938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.296970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.296992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.297433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.297967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.298000] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.298007] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.299890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.433 [2024-07-23 14:11:09.307679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.308259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.308761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.308792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.308814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.309147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.309278] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.309291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.309298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.311021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.433 [2024-07-23 14:11:09.319520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.320089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.320429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.320439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.320446] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.320540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.320621] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.320629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.433 [2024-07-23 14:11:09.320636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.433 [2024-07-23 14:11:09.322431] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:18.433 [2024-07-23 14:11:09.331449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:18.433 [2024-07-23 14:11:09.331961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.332446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:18.433 [2024-07-23 14:11:09.332478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:18.433 [2024-07-23 14:11:09.332501] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:18.433 [2024-07-23 14:11:09.332831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:18.433 [2024-07-23 14:11:09.333183] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:18.433 [2024-07-23 14:11:09.333192] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:18.434 [2024-07-23 14:11:09.333199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:18.434 [2024-07-23 14:11:09.334920] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:18.434 [2024-07-23 14:11:09.343207] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.343769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.344271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.344302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.344323] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.344653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.344855] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.344864] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.344874] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.346581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.434 [2024-07-23 14:11:09.355053] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.355597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.356094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.356126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.356147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.356293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.356415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.356423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.356429] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.358111] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.434 [2024-07-23 14:11:09.366917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.367536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.368014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.368060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.368083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.368364] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.368797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.368822] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.368843] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.370904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.434 [2024-07-23 14:11:09.378848] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.379401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.379826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.379836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.379843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.379979] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.380111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.380121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.380127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.381702] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.434 [2024-07-23 14:11:09.390620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.391191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.391685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.391716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.391738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.391969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.392180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.392190] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.392196] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.393879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.434 [2024-07-23 14:11:09.402396] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.402989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.403482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.403515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.403536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.404016] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.404255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.404265] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.404271] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.406715] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.434 [2024-07-23 14:11:09.415064] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.415674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.416058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.416090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.416112] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.416440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.416541] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.416550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.416557] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.418278] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.434 [2024-07-23 14:11:09.426909] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.427486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.427984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.428015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.428036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.428384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.428766] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.428791] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.428813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.430934] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.434 [2024-07-23 14:11:09.438824] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.434 [2024-07-23 14:11:09.439378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.439759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.434 [2024-07-23 14:11:09.439790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.434 [2024-07-23 14:11:09.439812] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.434 [2024-07-23 14:11:09.440154] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.434 [2024-07-23 14:11:09.440325] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.434 [2024-07-23 14:11:09.440335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.434 [2024-07-23 14:11:09.440341] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.434 [2024-07-23 14:11:09.442173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.695 [2024-07-23 14:11:09.450727] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.695 [2024-07-23 14:11:09.451286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.695 [2024-07-23 14:11:09.451707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.695 [2024-07-23 14:11:09.451738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.695 [2024-07-23 14:11:09.451761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.695 [2024-07-23 14:11:09.452104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.695 [2024-07-23 14:11:09.452247] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.695 [2024-07-23 14:11:09.452256] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.695 [2024-07-23 14:11:09.452263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.695 [2024-07-23 14:11:09.454079] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.695 [2024-07-23 14:11:09.462520] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.695 [2024-07-23 14:11:09.463018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.695 [2024-07-23 14:11:09.463474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.695 [2024-07-23 14:11:09.463506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.695 [2024-07-23 14:11:09.463529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.695 [2024-07-23 14:11:09.463860] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.695 [2024-07-23 14:11:09.464231] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.695 [2024-07-23 14:11:09.464246] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.695 [2024-07-23 14:11:09.464253] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.695 [2024-07-23 14:11:09.465978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.695 [2024-07-23 14:11:09.474403] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.695 [2024-07-23 14:11:09.474893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.695 [2024-07-23 14:11:09.475369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.695 [2024-07-23 14:11:09.475401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.695 [2024-07-23 14:11:09.475432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.695 [2024-07-23 14:11:09.475588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.695 [2024-07-23 14:11:09.475683] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.695 [2024-07-23 14:11:09.475692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.695 [2024-07-23 14:11:09.475699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.695 [2024-07-23 14:11:09.478497] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.695 [2024-07-23 14:11:09.486685] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.695 [2024-07-23 14:11:09.487251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.695 [2024-07-23 14:11:09.487758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.695 [2024-07-23 14:11:09.487789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.695 [2024-07-23 14:11:09.487811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.695 [2024-07-23 14:11:09.488155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.695 [2024-07-23 14:11:09.488489] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.695 [2024-07-23 14:11:09.488514] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.488535] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.490443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.498507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.499072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.499495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.499527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.499556] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.499936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.500282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.500313] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.500319] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.502108] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.510363] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.510945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.511446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.511478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.511499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.511881] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.512318] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.512349] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.512370] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.514163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.522170] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.522777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.523276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.523309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.523330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.523647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.523795] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.523804] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.523811] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.525653] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.534261] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.534808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.535238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.535271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.535293] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.535459] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.535562] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.535572] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.535580] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.537329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.546312] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.546904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.547402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.547434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.547456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.547704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.547822] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.547832] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.547838] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.549744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.558309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.558898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.559375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.559408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.559430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.559661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.559914] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.559924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.559930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.561718] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.570304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.570871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.571347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.571381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.571404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.571686] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.571923] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.571932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.571939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.573681] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.582104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.582692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.583099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.583132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.583156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.583635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.583853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.583862] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.583868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.585467] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.696 [2024-07-23 14:11:09.594279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.696 [2024-07-23 14:11:09.594830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.595235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.696 [2024-07-23 14:11:09.595268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.696 [2024-07-23 14:11:09.595291] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.696 [2024-07-23 14:11:09.595622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.696 [2024-07-23 14:11:09.595979] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.696 [2024-07-23 14:11:09.595988] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.696 [2024-07-23 14:11:09.595995] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.696 [2024-07-23 14:11:09.597807] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.606453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.606987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.607459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.607491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.607513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.607648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.697 [2024-07-23 14:11:09.607765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.697 [2024-07-23 14:11:09.607778] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.697 [2024-07-23 14:11:09.607785] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.697 [2024-07-23 14:11:09.609596] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.618542] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.619083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.619499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.619511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.619518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.619660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.697 [2024-07-23 14:11:09.619760] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.697 [2024-07-23 14:11:09.619769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.697 [2024-07-23 14:11:09.619775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.697 [2024-07-23 14:11:09.621528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.630388] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.630954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.631406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.631438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.631460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.631742] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.697 [2024-07-23 14:11:09.631939] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.697 [2024-07-23 14:11:09.631948] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.697 [2024-07-23 14:11:09.631955] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.697 [2024-07-23 14:11:09.633695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.642220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.642770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.643240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.643273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.643296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.643626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.697 [2024-07-23 14:11:09.643928] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.697 [2024-07-23 14:11:09.643938] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.697 [2024-07-23 14:11:09.643947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.697 [2024-07-23 14:11:09.645678] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.654096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.654674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.655111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.655143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.655164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.655583] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.697 [2024-07-23 14:11:09.655693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.697 [2024-07-23 14:11:09.655702] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.697 [2024-07-23 14:11:09.655708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.697 [2024-07-23 14:11:09.657505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.665779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.666312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.666680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.666711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.666732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.667076] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.697 [2024-07-23 14:11:09.667235] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.697 [2024-07-23 14:11:09.667244] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.697 [2024-07-23 14:11:09.667251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.697 [2024-07-23 14:11:09.668904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.677644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.678240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.678672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.678704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.678725] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.679070] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.697 [2024-07-23 14:11:09.679220] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.697 [2024-07-23 14:11:09.679229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.697 [2024-07-23 14:11:09.679235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.697 [2024-07-23 14:11:09.681049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.689419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.690006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.690451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.690484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.690506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.690836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.697 [2024-07-23 14:11:09.690931] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.697 [2024-07-23 14:11:09.690940] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.697 [2024-07-23 14:11:09.690947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.697 [2024-07-23 14:11:09.692587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.697 [2024-07-23 14:11:09.701229] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.697 [2024-07-23 14:11:09.701841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.702321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.697 [2024-07-23 14:11:09.702355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.697 [2024-07-23 14:11:09.702377] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.697 [2024-07-23 14:11:09.702707] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.698 [2024-07-23 14:11:09.703040] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.698 [2024-07-23 14:11:09.703077] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.698 [2024-07-23 14:11:09.703084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.698 [2024-07-23 14:11:09.704824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.958 [2024-07-23 14:11:09.713247] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.958 [2024-07-23 14:11:09.713726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.958 [2024-07-23 14:11:09.714177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.958 [2024-07-23 14:11:09.714190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.958 [2024-07-23 14:11:09.714197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.958 [2024-07-23 14:11:09.714297] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.958 [2024-07-23 14:11:09.714367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.958 [2024-07-23 14:11:09.714375] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.958 [2024-07-23 14:11:09.714382] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.958 [2024-07-23 14:11:09.716174] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.958 [2024-07-23 14:11:09.725031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.958 [2024-07-23 14:11:09.725582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.958 [2024-07-23 14:11:09.726088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.958 [2024-07-23 14:11:09.726115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.958 [2024-07-23 14:11:09.726123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.958 [2024-07-23 14:11:09.726252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.958 [2024-07-23 14:11:09.726367] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.958 [2024-07-23 14:11:09.726377] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.958 [2024-07-23 14:11:09.726383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.958 [2024-07-23 14:11:09.728161] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.958 [2024-07-23 14:11:09.736738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.958 [2024-07-23 14:11:09.737264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.958 [2024-07-23 14:11:09.737771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.958 [2024-07-23 14:11:09.737802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.958 [2024-07-23 14:11:09.737823] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.738300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.738431] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.738441] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.738447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.740881] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.749749] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.750252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.750757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.750789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.959 [2024-07-23 14:11:09.750810] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.751208] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.751401] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.751410] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.751417] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.753225] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.761656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.762212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.762644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.762676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.959 [2024-07-23 14:11:09.762699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.763031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.763301] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.763310] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.763317] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.765035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.773486] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.774041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.774546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.774557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.959 [2024-07-23 14:11:09.774564] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.774678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.774763] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.774771] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.774778] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.776465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.785501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.786121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.786495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.786506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.959 [2024-07-23 14:11:09.786514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.786678] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.786796] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.786806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.786813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.788561] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.797504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.798080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.798489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.798504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.959 [2024-07-23 14:11:09.798512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.798616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.798718] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.798726] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.798733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.800376] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.809503] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.809993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.810423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.810436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.959 [2024-07-23 14:11:09.810443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.810590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.810738] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.810748] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.810755] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.812534] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.821530] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.822138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.822709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.822720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.959 [2024-07-23 14:11:09.822728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.822830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.822962] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.822972] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.822978] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.824832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.833531] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.834170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.834542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.834553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.959 [2024-07-23 14:11:09.834566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.959 [2024-07-23 14:11:09.834654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.959 [2024-07-23 14:11:09.834742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.959 [2024-07-23 14:11:09.834751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.959 [2024-07-23 14:11:09.834758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.959 [2024-07-23 14:11:09.836459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.959 [2024-07-23 14:11:09.845608] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.959 [2024-07-23 14:11:09.846193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.846640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.959 [2024-07-23 14:11:09.846651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.846660] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.846807] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.846910] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.846919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.846925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.848717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.857527] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.858120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.858601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.858633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.858655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.859093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.859426] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.859437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.859447] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.861969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.870139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.870585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.870954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.870984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.871006] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.871457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.871660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.871668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.871675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.873405] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.882127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.882617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.883078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.883110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.883132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.883294] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.883349] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.883357] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.883364] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.885093] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.893992] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.894537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.895072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.895104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.895125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.895506] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.895614] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.895622] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.895628] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.897394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.905912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.906429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.906790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.906820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.906842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.907333] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.907727] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.907735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.907742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.909306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.918104] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.918639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.919063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.919097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.919119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.919402] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.919821] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.919830] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.919836] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.922357] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.930818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.931354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.931822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.931853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.931875] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.932166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.932551] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.932575] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.932596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.934443] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.942833] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.943392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.943752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.943784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.943805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.944247] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.960 [2024-07-23 14:11:09.944730] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.960 [2024-07-23 14:11:09.944762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.960 [2024-07-23 14:11:09.944783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.960 [2024-07-23 14:11:09.946505] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.960 [2024-07-23 14:11:09.954791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.960 [2024-07-23 14:11:09.955373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.955734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.960 [2024-07-23 14:11:09.955777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.960 [2024-07-23 14:11:09.955784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.960 [2024-07-23 14:11:09.955877] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.961 [2024-07-23 14:11:09.956012] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.961 [2024-07-23 14:11:09.956019] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.961 [2024-07-23 14:11:09.956025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.961 [2024-07-23 14:11:09.957884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:18.961 [2024-07-23 14:11:09.966729] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:18.961 [2024-07-23 14:11:09.967336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.961 [2024-07-23 14:11:09.967699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:18.961 [2024-07-23 14:11:09.967709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:18.961 [2024-07-23 14:11:09.967717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:18.961 [2024-07-23 14:11:09.967849] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:18.961 [2024-07-23 14:11:09.967997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:18.961 [2024-07-23 14:11:09.968005] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:18.961 [2024-07-23 14:11:09.968012] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:18.961 [2024-07-23 14:11:09.969953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.221 [2024-07-23 14:11:09.978631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.221 [2024-07-23 14:11:09.979098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.221 [2024-07-23 14:11:09.979461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.221 [2024-07-23 14:11:09.979471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.221 [2024-07-23 14:11:09.979478] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.221 [2024-07-23 14:11:09.979550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.221 [2024-07-23 14:11:09.979652] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.221 [2024-07-23 14:11:09.979660] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.221 [2024-07-23 14:11:09.979670] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.221 [2024-07-23 14:11:09.981509] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.221 [2024-07-23 14:11:09.990724] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.221 [2024-07-23 14:11:09.991086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.221 [2024-07-23 14:11:09.991424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.221 [2024-07-23 14:11:09.991434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.221 [2024-07-23 14:11:09.991442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.221 [2024-07-23 14:11:09.991528] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.221 [2024-07-23 14:11:09.991645] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.221 [2024-07-23 14:11:09.991654] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.221 [2024-07-23 14:11:09.991660] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.221 [2024-07-23 14:11:09.993382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.221 [2024-07-23 14:11:10.002817] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.221 [2024-07-23 14:11:10.003432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.221 [2024-07-23 14:11:10.003867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.221 [2024-07-23 14:11:10.003878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.221 [2024-07-23 14:11:10.003886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.221 [2024-07-23 14:11:10.003990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.221 [2024-07-23 14:11:10.004072] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.221 [2024-07-23 14:11:10.004080] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.221 [2024-07-23 14:11:10.004087] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.221 [2024-07-23 14:11:10.006122] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.221 [2024-07-23 14:11:10.015012] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.221 [2024-07-23 14:11:10.015498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.221 [2024-07-23 14:11:10.015951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.221 [2024-07-23 14:11:10.015962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.221 [2024-07-23 14:11:10.015970] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.016139] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.016289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.016297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.016304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.018149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.026963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.027561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.027927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.027939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.222 [2024-07-23 14:11:10.027947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.028033] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.028129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.028139] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.028146] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.030126] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.039050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.039550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.039910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.039921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.222 [2024-07-23 14:11:10.039928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.040052] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.040171] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.040179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.040186] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.042036] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.051289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.051799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.052166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.052177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.222 [2024-07-23 14:11:10.052185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.052298] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.052412] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.052420] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.052427] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.054083] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.063232] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.063732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.064091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.064103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.222 [2024-07-23 14:11:10.064110] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.064223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.064338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.064346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.064352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.066018] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.075296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.075833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.076191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.076203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.222 [2024-07-23 14:11:10.076210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.076309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.076394] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.076402] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.076408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.078199] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.087206] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.087664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.088023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.088067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.222 [2024-07-23 14:11:10.088090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.088276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.088393] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.088401] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.088408] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.090321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.099139] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.099667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.100016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.100027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.222 [2024-07-23 14:11:10.100035] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.100145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.100248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.100257] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.100263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.101916] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.111112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.111714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.112004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.112014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.222 [2024-07-23 14:11:10.112021] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.222 [2024-07-23 14:11:10.112128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.222 [2024-07-23 14:11:10.112216] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.222 [2024-07-23 14:11:10.112223] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.222 [2024-07-23 14:11:10.112230] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.222 [2024-07-23 14:11:10.114003] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.222 [2024-07-23 14:11:10.123149] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.222 [2024-07-23 14:11:10.123640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.222 [2024-07-23 14:11:10.124001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.124011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.124018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.124126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.124243] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.124251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.124258] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.126124] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.135088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.135639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.136078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.136091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.136102] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.136221] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.136338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.136346] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.136352] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.138117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.147062] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.147608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.147959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.147970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.147977] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.148100] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.148203] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.148211] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.148218] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.150026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.159075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.159644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.160052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.160063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.160070] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.160157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.160289] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.160297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.160304] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.162306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.170918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.171474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.171883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.171893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.171901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.172020] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.172111] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.172119] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.172126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.173884] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.183065] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.183685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.184053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.184064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.184071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.184188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.184274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.184282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.184288] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.185955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.194996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.195590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.195973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.195984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.195991] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.196143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.196261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.196269] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.196276] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.197988] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.206975] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.207551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.207906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.207916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.207923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.208060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.208180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.208187] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.208194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.209953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.219240] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.219855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.220334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.220367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.220389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.220868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.223 [2024-07-23 14:11:10.221160] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.223 [2024-07-23 14:11:10.221185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.223 [2024-07-23 14:11:10.221216] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.223 [2024-07-23 14:11:10.223033] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.223 [2024-07-23 14:11:10.231341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.223 [2024-07-23 14:11:10.231786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.232260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.223 [2024-07-23 14:11:10.232292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.223 [2024-07-23 14:11:10.232313] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.223 [2024-07-23 14:11:10.232513] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.224 [2024-07-23 14:11:10.232616] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.224 [2024-07-23 14:11:10.232624] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.224 [2024-07-23 14:11:10.232630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.224 [2024-07-23 14:11:10.234498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.484 [2024-07-23 14:11:10.243258] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.484 [2024-07-23 14:11:10.243838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.484 [2024-07-23 14:11:10.244310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.484 [2024-07-23 14:11:10.244343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.484 [2024-07-23 14:11:10.244364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.484 [2024-07-23 14:11:10.244796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.484 [2024-07-23 14:11:10.245186] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.484 [2024-07-23 14:11:10.245218] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.484 [2024-07-23 14:11:10.245239] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.484 [2024-07-23 14:11:10.247843] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.484 [2024-07-23 14:11:10.255635] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.484 [2024-07-23 14:11:10.256204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.484 [2024-07-23 14:11:10.256672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.484 [2024-07-23 14:11:10.256702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.484 [2024-07-23 14:11:10.256724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.485 [2024-07-23 14:11:10.257065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.485 [2024-07-23 14:11:10.257301] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.485 [2024-07-23 14:11:10.257308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.485 [2024-07-23 14:11:10.257315] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.485 [2024-07-23 14:11:10.259074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.485 [2024-07-23 14:11:10.267612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.485 [2024-07-23 14:11:10.268225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.485 [2024-07-23 14:11:10.268695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.485 [2024-07-23 14:11:10.268726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.485 [2024-07-23 14:11:10.268747] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.485 [2024-07-23 14:11:10.269241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.485 [2024-07-23 14:11:10.269472] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.485 [2024-07-23 14:11:10.269480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.485 [2024-07-23 14:11:10.269487] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.485 [2024-07-23 14:11:10.271248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.485 [2024-07-23 14:11:10.279586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.485 [2024-07-23 14:11:10.280131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.485 [2024-07-23 14:11:10.280609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.485 [2024-07-23 14:11:10.280641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.485 [2024-07-23 14:11:10.280663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.485 [2024-07-23 14:11:10.281105] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.485 [2024-07-23 14:11:10.281238] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.485 [2024-07-23 14:11:10.281246] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.485 [2024-07-23 14:11:10.281256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.485 [2024-07-23 14:11:10.282896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.485 [2024-07-23 14:11:10.291572] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:19.485 [2024-07-23 14:11:10.292038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.485 [2024-07-23 14:11:10.292534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:19.485 [2024-07-23 14:11:10.292565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:19.485 [2024-07-23 14:11:10.292587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:19.485 [2024-07-23 14:11:10.292703] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:19.485 [2024-07-23 14:11:10.292820] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:19.485 [2024-07-23 14:11:10.292828] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:19.485 [2024-07-23 14:11:10.292835] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:19.485 [2024-07-23 14:11:10.294559] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:19.485 [2024-07-23 14:11:10.303376] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.485 [2024-07-23 14:11:10.303972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.304448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.304482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.485 [2024-07-23 14:11:10.304489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.485 [2024-07-23 14:11:10.304638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.485 [2024-07-23 14:11:10.304755] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.485 [2024-07-23 14:11:10.304763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.485 [2024-07-23 14:11:10.304769] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.485 [2024-07-23 14:11:10.307095] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.485 [2024-07-23 14:11:10.316077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.485 [2024-07-23 14:11:10.316598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.317078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.317111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.485 [2024-07-23 14:11:10.317132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.485 [2024-07-23 14:11:10.317563] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.485 [2024-07-23 14:11:10.317736] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.485 [2024-07-23 14:11:10.317744] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.485 [2024-07-23 14:11:10.317750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.485 [2024-07-23 14:11:10.319455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.485 [2024-07-23 14:11:10.327933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.485 [2024-07-23 14:11:10.328391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.328866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.328897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.485 [2024-07-23 14:11:10.328919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.485 [2024-07-23 14:11:10.329292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.485 [2024-07-23 14:11:10.329379] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.485 [2024-07-23 14:11:10.329387] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.485 [2024-07-23 14:11:10.329393] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.485 [2024-07-23 14:11:10.331214] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.485 [2024-07-23 14:11:10.340013] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.485 [2024-07-23 14:11:10.340661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.340985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.341016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.485 [2024-07-23 14:11:10.341038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.485 [2024-07-23 14:11:10.341384] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.485 [2024-07-23 14:11:10.341653] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.485 [2024-07-23 14:11:10.341661] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.485 [2024-07-23 14:11:10.341667] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.485 [2024-07-23 14:11:10.343346] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.485 [2024-07-23 14:11:10.352068] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.485 [2024-07-23 14:11:10.352631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.353100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.353134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.485 [2024-07-23 14:11:10.353156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.485 [2024-07-23 14:11:10.353637] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.485 [2024-07-23 14:11:10.353754] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.485 [2024-07-23 14:11:10.353762] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.485 [2024-07-23 14:11:10.353769] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.485 [2024-07-23 14:11:10.355751] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.485 [2024-07-23 14:11:10.363994] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.485 [2024-07-23 14:11:10.364530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.365002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.485 [2024-07-23 14:11:10.365033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.485 [2024-07-23 14:11:10.365068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.485 [2024-07-23 14:11:10.365348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.485 [2024-07-23 14:11:10.365534] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.485 [2024-07-23 14:11:10.365542] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.365548] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.367830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.486 [2024-07-23 14:11:10.376644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.377220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.377697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.377727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.377748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.378144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.378397] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.378405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.378411] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.380151] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.486 [2024-07-23 14:11:10.388621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.389217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.389694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.389724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.389746] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.389973] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.390138] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.390146] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.390152] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.391902] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.486 [2024-07-23 14:11:10.400529] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.401226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.401598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.401609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.401617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.401765] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.401913] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.401921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.401927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.403813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.486 [2024-07-23 14:11:10.412339] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.412899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.413298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.413331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.413353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.413584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.413878] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.413886] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.413892] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.415595] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.486 [2024-07-23 14:11:10.424239] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.424792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.425172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.425204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.425226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.425607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.425916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.425924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.425930] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.427734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.486 [2024-07-23 14:11:10.436225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.436653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.437083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.437125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.437148] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.437430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.437862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.437885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.437906] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.439756] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.486 [2024-07-23 14:11:10.448146] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.448740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.449141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.449175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.449197] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.449628] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.449934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.449942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.449948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.451615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.486 [2024-07-23 14:11:10.459944] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.460476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.460902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.460932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.460953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.461347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.461681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.461705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.461726] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.463824] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.486 [2024-07-23 14:11:10.471907] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.486 [2024-07-23 14:11:10.472437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.472912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.486 [2024-07-23 14:11:10.472948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.486 [2024-07-23 14:11:10.472959] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.486 [2024-07-23 14:11:10.473049] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.486 [2024-07-23 14:11:10.473197] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.486 [2024-07-23 14:11:10.473205] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.486 [2024-07-23 14:11:10.473212] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.486 [2024-07-23 14:11:10.474945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.486 [2024-07-23 14:11:10.483849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.487 [2024-07-23 14:11:10.484339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.487 [2024-07-23 14:11:10.484695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.487 [2024-07-23 14:11:10.484705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.487 [2024-07-23 14:11:10.484712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.487 [2024-07-23 14:11:10.484825] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.487 [2024-07-23 14:11:10.484909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.487 [2024-07-23 14:11:10.484917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.487 [2024-07-23 14:11:10.484923] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.487 [2024-07-23 14:11:10.486811] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.487 [2024-07-23 14:11:10.495654] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.487 [2024-07-23 14:11:10.496228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.487 [2024-07-23 14:11:10.496626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.487 [2024-07-23 14:11:10.496657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.487 [2024-07-23 14:11:10.496678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.487 [2024-07-23 14:11:10.496875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.487 [2024-07-23 14:11:10.496961] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.487 [2024-07-23 14:11:10.496969] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.487 [2024-07-23 14:11:10.496975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.487 [2024-07-23 14:11:10.498830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.750 [2024-07-23 14:11:10.507620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.750 [2024-07-23 14:11:10.508207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.750 [2024-07-23 14:11:10.508620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.750 [2024-07-23 14:11:10.508650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.750 [2024-07-23 14:11:10.508671] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.750 [2024-07-23 14:11:10.509108] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.750 [2024-07-23 14:11:10.509226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.750 [2024-07-23 14:11:10.509234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.750 [2024-07-23 14:11:10.509240] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.750 [2024-07-23 14:11:10.511063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.750 [2024-07-23 14:11:10.519551] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.750 [2024-07-23 14:11:10.520063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.750 [2024-07-23 14:11:10.520535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.750 [2024-07-23 14:11:10.520565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.750 [2024-07-23 14:11:10.520587] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.750 [2024-07-23 14:11:10.520966] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.750 [2024-07-23 14:11:10.521237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.751 [2024-07-23 14:11:10.521245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.751 [2024-07-23 14:11:10.521252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.751 [2024-07-23 14:11:10.523025] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.751 [2024-07-23 14:11:10.531380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.751 [2024-07-23 14:11:10.531860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.532334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.532366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.751 [2024-07-23 14:11:10.532387] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.751 [2024-07-23 14:11:10.532818] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.751 [2024-07-23 14:11:10.533080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.751 [2024-07-23 14:11:10.533088] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.751 [2024-07-23 14:11:10.533094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.751 [2024-07-23 14:11:10.535002] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.751 [2024-07-23 14:11:10.543469] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.751 [2024-07-23 14:11:10.544055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.544318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.544348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.751 [2024-07-23 14:11:10.544369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.751 [2024-07-23 14:11:10.544749] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.751 [2024-07-23 14:11:10.545102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.751 [2024-07-23 14:11:10.545128] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.751 [2024-07-23 14:11:10.545149] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.751 [2024-07-23 14:11:10.546980] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.751 [2024-07-23 14:11:10.555394] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.751 [2024-07-23 14:11:10.555975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.556443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.556476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.751 [2024-07-23 14:11:10.556497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.751 [2024-07-23 14:11:10.556879] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.751 [2024-07-23 14:11:10.557150] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.751 [2024-07-23 14:11:10.557159] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.751 [2024-07-23 14:11:10.557166] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.751 [2024-07-23 14:11:10.559198] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.751 [2024-07-23 14:11:10.567496] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.751 [2024-07-23 14:11:10.568055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.568532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.568563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.751 [2024-07-23 14:11:10.568584] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.751 [2024-07-23 14:11:10.568770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.751 [2024-07-23 14:11:10.568916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.751 [2024-07-23 14:11:10.568928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.751 [2024-07-23 14:11:10.568937] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.751 [2024-07-23 14:11:10.571382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.751 [2024-07-23 14:11:10.579717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.751 [2024-07-23 14:11:10.580288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.580652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.580683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.751 [2024-07-23 14:11:10.580705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.751 [2024-07-23 14:11:10.580962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.751 [2024-07-23 14:11:10.581082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.751 [2024-07-23 14:11:10.581093] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.751 [2024-07-23 14:11:10.581099] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.751 [2024-07-23 14:11:10.583031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.751 [2024-07-23 14:11:10.591733] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.751 [2024-07-23 14:11:10.592309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.592653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.751 [2024-07-23 14:11:10.592684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.751 [2024-07-23 14:11:10.592705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.751 [2024-07-23 14:11:10.593178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.751 [2024-07-23 14:11:10.593265] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.751 [2024-07-23 14:11:10.593273] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.751 [2024-07-23 14:11:10.593279] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.752 [2024-07-23 14:11:10.594879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.752 [2024-07-23 14:11:10.603590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.752 [2024-07-23 14:11:10.604176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.604655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.604686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.752 [2024-07-23 14:11:10.604708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.752 [2024-07-23 14:11:10.605103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.752 [2024-07-23 14:11:10.605237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.752 [2024-07-23 14:11:10.605245] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.752 [2024-07-23 14:11:10.605251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.752 [2024-07-23 14:11:10.606996] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.752 [2024-07-23 14:11:10.615613] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.752 [2024-07-23 14:11:10.616206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.616677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.616710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.752 [2024-07-23 14:11:10.616732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.752 [2024-07-23 14:11:10.617040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.752 [2024-07-23 14:11:10.617192] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.752 [2024-07-23 14:11:10.617200] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.752 [2024-07-23 14:11:10.617210] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.752 [2024-07-23 14:11:10.619035] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.752 [2024-07-23 14:11:10.627417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.752 [2024-07-23 14:11:10.627931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.628275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.628309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.752 [2024-07-23 14:11:10.628331] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.752 [2024-07-23 14:11:10.628663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.752 [2024-07-23 14:11:10.628895] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.752 [2024-07-23 14:11:10.628919] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.752 [2024-07-23 14:11:10.628939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.752 [2024-07-23 14:11:10.631049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.752 [2024-07-23 14:11:10.640174] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.752 [2024-07-23 14:11:10.640758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.641151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.641162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.752 [2024-07-23 14:11:10.641169] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.752 [2024-07-23 14:11:10.641283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.752 [2024-07-23 14:11:10.641396] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.752 [2024-07-23 14:11:10.641404] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.752 [2024-07-23 14:11:10.641410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.752 [2024-07-23 14:11:10.643224] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.752 [2024-07-23 14:11:10.651928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.752 [2024-07-23 14:11:10.652470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.652873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.652904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.752 [2024-07-23 14:11:10.652926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.752 [2024-07-23 14:11:10.653369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.752 [2024-07-23 14:11:10.653601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.752 [2024-07-23 14:11:10.653608] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.752 [2024-07-23 14:11:10.653615] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.752 [2024-07-23 14:11:10.655303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.752 [2024-07-23 14:11:10.663750] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.752 [2024-07-23 14:11:10.664263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.664699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.664729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.752 [2024-07-23 14:11:10.664750] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.752 [2024-07-23 14:11:10.665188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.752 [2024-07-23 14:11:10.665302] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.752 [2024-07-23 14:11:10.665309] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.752 [2024-07-23 14:11:10.665316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.752 [2024-07-23 14:11:10.666978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.752 [2024-07-23 14:11:10.675387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.752 [2024-07-23 14:11:10.675929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.676387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.752 [2024-07-23 14:11:10.676419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.752 [2024-07-23 14:11:10.676441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.752 [2024-07-23 14:11:10.676872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.752 [2024-07-23 14:11:10.677264] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.753 [2024-07-23 14:11:10.677290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.753 [2024-07-23 14:11:10.677310] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.753 [2024-07-23 14:11:10.679056] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.753 [2024-07-23 14:11:10.687341] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.753 [2024-07-23 14:11:10.687902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.688382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.688414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.753 [2024-07-23 14:11:10.688448] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.753 [2024-07-23 14:11:10.688577] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.753 [2024-07-23 14:11:10.688690] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.753 [2024-07-23 14:11:10.688698] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.753 [2024-07-23 14:11:10.688704] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.753 [2024-07-23 14:11:10.690433] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.753 [2024-07-23 14:11:10.699067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.753 [2024-07-23 14:11:10.699643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.700121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.700154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.753 [2024-07-23 14:11:10.700176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.753 [2024-07-23 14:11:10.700357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.753 [2024-07-23 14:11:10.700567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.753 [2024-07-23 14:11:10.700578] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.753 [2024-07-23 14:11:10.700586] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.753 [2024-07-23 14:11:10.703090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.753 [2024-07-23 14:11:10.711307] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.753 [2024-07-23 14:11:10.711878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.712313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.712360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.753 [2024-07-23 14:11:10.712382] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.753 [2024-07-23 14:11:10.712861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.753 [2024-07-23 14:11:10.713252] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.753 [2024-07-23 14:11:10.713277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.753 [2024-07-23 14:11:10.713297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.753 [2024-07-23 14:11:10.715173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.753 [2024-07-23 14:11:10.723134] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.753 [2024-07-23 14:11:10.723467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.723906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.723936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.753 [2024-07-23 14:11:10.723957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.753 [2024-07-23 14:11:10.724301] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.753 [2024-07-23 14:11:10.724430] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.753 [2024-07-23 14:11:10.724437] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.753 [2024-07-23 14:11:10.724444] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.753 [2024-07-23 14:11:10.726178] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:19.753 [2024-07-23 14:11:10.734974] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.753 [2024-07-23 14:11:10.735589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.736083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.736115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.753 [2024-07-23 14:11:10.736137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.753 [2024-07-23 14:11:10.736591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.753 [2024-07-23 14:11:10.736748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.753 [2024-07-23 14:11:10.736756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.753 [2024-07-23 14:11:10.736762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.753 [2024-07-23 14:11:10.738306] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:19.753 [2024-07-23 14:11:10.746830] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:19.753 [2024-07-23 14:11:10.747393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.747793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:19.753 [2024-07-23 14:11:10.747823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:19.753 [2024-07-23 14:11:10.747845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:19.753 [2024-07-23 14:11:10.748136] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:19.753 [2024-07-23 14:11:10.748470] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:19.753 [2024-07-23 14:11:10.748494] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:19.753 [2024-07-23 14:11:10.748515] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:19.753 [2024-07-23 14:11:10.750318] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[log condensed: the reset/reconnect cycle above repeats identically ~46 more times, roughly every 12 ms from 14:11:10.758 through 14:11:11.300. Every iteration against nqn.2016-06.io.spdk:cnode1 follows the same sequence: connect() to 10.0.0.2:4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports the sock connection error for tqpair=0xadd900, the flush on tqpair=0xadd900 fails with (9): Bad file descriptor, controller reinitialization fails, the controller enters the failed state, and bdev_nvme logs "Resetting controller failed."]
00:29:20.331 [2024-07-23 14:11:11.309536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.331 [2024-07-23 14:11:11.310070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.331 [2024-07-23 14:11:11.310567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.331 [2024-07-23 14:11:11.310598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.331 [2024-07-23 14:11:11.310619] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.331 [2024-07-23 14:11:11.310750] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.331 [2024-07-23 14:11:11.310864] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.331 [2024-07-23 14:11:11.310872] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.331 [2024-07-23 14:11:11.310878] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.331 [2024-07-23 14:11:11.312640] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.331 [2024-07-23 14:11:11.321521] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.331 [2024-07-23 14:11:11.322102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.331 [2024-07-23 14:11:11.322510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.331 [2024-07-23 14:11:11.322540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.331 [2024-07-23 14:11:11.322562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.331 [2024-07-23 14:11:11.322843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.331 [2024-07-23 14:11:11.323176] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.331 [2024-07-23 14:11:11.323185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.331 [2024-07-23 14:11:11.323191] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.331 [2024-07-23 14:11:11.324974] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.331 [2024-07-23 14:11:11.333348] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.331 [2024-07-23 14:11:11.333890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.331 [2024-07-23 14:11:11.334359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.331 [2024-07-23 14:11:11.334391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.331 [2024-07-23 14:11:11.334412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.331 [2024-07-23 14:11:11.334736] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.331 [2024-07-23 14:11:11.334815] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.331 [2024-07-23 14:11:11.334822] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.331 [2024-07-23 14:11:11.334828] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.331 [2024-07-23 14:11:11.336545] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.593 [2024-07-23 14:11:11.345286] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.593 [2024-07-23 14:11:11.345786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.346192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.346224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.593 [2024-07-23 14:11:11.346248] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.593 [2024-07-23 14:11:11.346362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.593 [2024-07-23 14:11:11.346492] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.593 [2024-07-23 14:11:11.346500] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.593 [2024-07-23 14:11:11.346506] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.593 [2024-07-23 14:11:11.348355] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.593 [2024-07-23 14:11:11.357343] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.593 [2024-07-23 14:11:11.357744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.358163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.358196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.593 [2024-07-23 14:11:11.358218] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.593 [2024-07-23 14:11:11.358598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.593 [2024-07-23 14:11:11.358880] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.593 [2024-07-23 14:11:11.358905] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.593 [2024-07-23 14:11:11.358925] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.593 [2024-07-23 14:11:11.360896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.593 [2024-07-23 14:11:11.369102] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.593 [2024-07-23 14:11:11.369517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.369983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.370026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.593 [2024-07-23 14:11:11.370033] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.593 [2024-07-23 14:11:11.370122] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.593 [2024-07-23 14:11:11.370207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.593 [2024-07-23 14:11:11.370215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.593 [2024-07-23 14:11:11.370221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.593 [2024-07-23 14:11:11.371989] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.593 [2024-07-23 14:11:11.380898] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.593 [2024-07-23 14:11:11.381449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.381823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.381854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.593 [2024-07-23 14:11:11.381876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.593 [2024-07-23 14:11:11.382217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.593 [2024-07-23 14:11:11.382346] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.593 [2024-07-23 14:11:11.382353] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.593 [2024-07-23 14:11:11.382359] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.593 [2024-07-23 14:11:11.384037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.593 [2024-07-23 14:11:11.392766] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.593 [2024-07-23 14:11:11.393365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.393797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.393827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.593 [2024-07-23 14:11:11.393849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.593 [2024-07-23 14:11:11.394011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.593 [2024-07-23 14:11:11.394114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.593 [2024-07-23 14:11:11.394122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.593 [2024-07-23 14:11:11.394129] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.593 [2024-07-23 14:11:11.395880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.593 [2024-07-23 14:11:11.404818] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.593 [2024-07-23 14:11:11.405363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.405820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.593 [2024-07-23 14:11:11.405851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.593 [2024-07-23 14:11:11.405880] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.594 [2024-07-23 14:11:11.406322] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.594 [2024-07-23 14:11:11.406458] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.594 [2024-07-23 14:11:11.406465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.594 [2024-07-23 14:11:11.406471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.594 [2024-07-23 14:11:11.408196] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.594 [2024-07-23 14:11:11.416712] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.594 [2024-07-23 14:11:11.417282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.417761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.417793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.594 [2024-07-23 14:11:11.417814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.594 [2024-07-23 14:11:11.418156] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.594 [2024-07-23 14:11:11.418415] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.594 [2024-07-23 14:11:11.418423] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.594 [2024-07-23 14:11:11.418428] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.594 [2024-07-23 14:11:11.420742] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.594 [2024-07-23 14:11:11.429418] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.594 [2024-07-23 14:11:11.429984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.430359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.430392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.594 [2024-07-23 14:11:11.430413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.594 [2024-07-23 14:11:11.430844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.594 [2024-07-23 14:11:11.431269] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.594 [2024-07-23 14:11:11.431277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.594 [2024-07-23 14:11:11.431283] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.594 [2024-07-23 14:11:11.432941] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.594 [2024-07-23 14:11:11.441194] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.594 [2024-07-23 14:11:11.441686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.442097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.442130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.594 [2024-07-23 14:11:11.442152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.594 [2024-07-23 14:11:11.442591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.594 [2024-07-23 14:11:11.442922] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.594 [2024-07-23 14:11:11.442946] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.594 [2024-07-23 14:11:11.442966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.594 [2024-07-23 14:11:11.445159] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.594 [2024-07-23 14:11:11.452965] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.594 [2024-07-23 14:11:11.453425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.453836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.453867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.594 [2024-07-23 14:11:11.453889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.594 [2024-07-23 14:11:11.454282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.594 [2024-07-23 14:11:11.454507] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.594 [2024-07-23 14:11:11.454515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.594 [2024-07-23 14:11:11.454521] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.594 [2024-07-23 14:11:11.456180] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.594 [2024-07-23 14:11:11.464871] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.594 [2024-07-23 14:11:11.465408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.465719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.465729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.594 [2024-07-23 14:11:11.465736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.594 [2024-07-23 14:11:11.465865] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.594 [2024-07-23 14:11:11.466037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.594 [2024-07-23 14:11:11.466051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.594 [2024-07-23 14:11:11.466058] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.594 [2024-07-23 14:11:11.467719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.594 [2024-07-23 14:11:11.476838] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.594 [2024-07-23 14:11:11.477393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.477743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.477754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.594 [2024-07-23 14:11:11.477761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.594 [2024-07-23 14:11:11.477831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.594 [2024-07-23 14:11:11.477933] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.594 [2024-07-23 14:11:11.477941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.594 [2024-07-23 14:11:11.477947] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.594 [2024-07-23 14:11:11.479555] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.594 [2024-07-23 14:11:11.488730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.594 [2024-07-23 14:11:11.489151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.489574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.594 [2024-07-23 14:11:11.489605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.594 [2024-07-23 14:11:11.489626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.594 [2024-07-23 14:11:11.490007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.594 [2024-07-23 14:11:11.490282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.594 [2024-07-23 14:11:11.490290] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.594 [2024-07-23 14:11:11.490296] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.594 [2024-07-23 14:11:11.492006] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.594 [2024-07-23 14:11:11.500586] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.594 [2024-07-23 14:11:11.501170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.594 [2024-07-23 14:11:11.501529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.594 [2024-07-23 14:11:11.501559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.594 [2024-07-23 14:11:11.501581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.594 [2024-07-23 14:11:11.501912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.594 [2024-07-23 14:11:11.502255] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.594 [2024-07-23 14:11:11.502280] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.594 [2024-07-23 14:11:11.502301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.594 [2024-07-23 14:11:11.504344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3429591 Killed "${NVMF_APP[@]}" "$@"
00:29:20.594 14:11:11 -- host/bdevperf.sh@36 -- # tgt_init
00:29:20.594 14:11:11 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:20.594 14:11:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:29:20.594 14:11:11 -- common/autotest_common.sh@712 -- # xtrace_disable
00:29:20.594 14:11:11 -- common/autotest_common.sh@10 -- # set +x
00:29:20.594 [2024-07-23 14:11:11.512722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.594 14:11:11 -- nvmf/common.sh@469 -- # nvmfpid=3431034
00:29:20.594 14:11:11 -- nvmf/common.sh@470 -- # waitforlisten 3431034
00:29:20.595 [2024-07-23 14:11:11.513458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.595 14:11:11 -- common/autotest_common.sh@819 -- # '[' -z 3431034 ']'
00:29:20.595 14:11:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:20.595 14:11:11 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:20.595 14:11:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:20.595 [2024-07-23 14:11:11.513793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.595 [2024-07-23 14:11:11.513813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.595 [2024-07-23 14:11:11.513825] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.595 14:11:11 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:20.595 [2024-07-23 14:11:11.513982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.595 14:11:11 -- common/autotest_common.sh@10 -- # set +x
00:29:20.595 [2024-07-23 14:11:11.514091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.595 [2024-07-23 14:11:11.514115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.595 [2024-07-23 14:11:11.514127] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.595 14:11:11 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:20.595 [2024-07-23 14:11:11.515908] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.595 [2024-07-23 14:11:11.524785] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.595 [2024-07-23 14:11:11.525340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.595 [2024-07-23 14:11:11.525656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.595 [2024-07-23 14:11:11.525667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.595 [2024-07-23 14:11:11.525675] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.595 [2024-07-23 14:11:11.525778] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.595 [2024-07-23 14:11:11.525926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.595 [2024-07-23 14:11:11.525934] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.595 [2024-07-23 14:11:11.525941] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.595 [2024-07-23 14:11:11.527890] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
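Interleaved with the reconnect errors above, bdevperf.sh is restarting the target: the old nvmf_tgt (pid 3429591) was killed, tgt_init ran nvmfappstart -m 0xE, the new pid 3431034 was recorded, and waitforlisten now polls (max_retries=100) until the app answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming a simple RPC readiness probe rather than the exact waitforlisten logic in test/nvmf/common.sh and autotest_common.sh:

    #!/usr/bin/env bash
    # Sketch: launch nvmf_tgt in the test's network namespace and wait for
    # its RPC socket, approximating the nvmfappstart/waitforlisten trace above.
    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    max_retries=100
    for i in $(seq 1 "$max_retries"); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        # Ready once the RPC socket exists and answers a trivial method call.
        if [ -S /var/tmp/spdk.sock ] && ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; then
            echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.5
    done

The reconnect failures keep firing while this runs because the initiator's retry loop is independent of the target's restart; they stop only once the new target is listening on 4420 again.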
00:29:20.595 [2024-07-23 14:11:11.536710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.595 [2024-07-23 14:11:11.537287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.537603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.537614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.595 [2024-07-23 14:11:11.537621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.595 [2024-07-23 14:11:11.537708] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.595 [2024-07-23 14:11:11.537841] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.595 [2024-07-23 14:11:11.537849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.595 [2024-07-23 14:11:11.537861] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.595 [2024-07-23 14:11:11.539419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.595 [2024-07-23 14:11:11.548825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.595 [2024-07-23 14:11:11.549350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.549709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.549721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.595 [2024-07-23 14:11:11.549728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.595 [2024-07-23 14:11:11.549830] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.595 [2024-07-23 14:11:11.549918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.595 [2024-07-23 14:11:11.549926] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.595 [2024-07-23 14:11:11.549932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.595 [2024-07-23 14:11:11.551739] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.595 [2024-07-23 14:11:11.557502] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:29:20.595 [2024-07-23 14:11:11.557540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.595 [2024-07-23 14:11:11.560908] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.595 [2024-07-23 14:11:11.561445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.561761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.561771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.595 [2024-07-23 14:11:11.561779] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.595 [2024-07-23 14:11:11.561882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.595 [2024-07-23 14:11:11.562015] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.595 [2024-07-23 14:11:11.562023] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.595 [2024-07-23 14:11:11.562030] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.595 [2024-07-23 14:11:11.563842] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.595 [2024-07-23 14:11:11.572820] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.595 [2024-07-23 14:11:11.573339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.573656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.573666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.595 [2024-07-23 14:11:11.573674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.595 [2024-07-23 14:11:11.573792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.595 [2024-07-23 14:11:11.573909] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.595 [2024-07-23 14:11:11.573921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.595 [2024-07-23 14:11:11.573928] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.595 [2024-07-23 14:11:11.575859] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.595 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.595 [2024-07-23 14:11:11.584816] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.595 [2024-07-23 14:11:11.585320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.585628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.585639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.595 [2024-07-23 14:11:11.585646] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.595 [2024-07-23 14:11:11.585760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.595 [2024-07-23 14:11:11.585875] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.595 [2024-07-23 14:11:11.585883] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.595 [2024-07-23 14:11:11.585889] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.595 [2024-07-23 14:11:11.587950] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.595 [2024-07-23 14:11:11.596789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.595 [2024-07-23 14:11:11.597350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.597662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.595 [2024-07-23 14:11:11.597673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.595 [2024-07-23 14:11:11.597680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.595 [2024-07-23 14:11:11.597783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.596 [2024-07-23 14:11:11.597900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.596 [2024-07-23 14:11:11.597908] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.596 [2024-07-23 14:11:11.597915] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.596 [2024-07-23 14:11:11.599755] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
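The EAL notice at the start of this block ("No free 2048 kB hugepages reported on node 1") means DPDK found no preallocated 2 MB pages on NUMA node 1; initialization continues here because pages are available elsewhere. When such a notice is fatal, hugepages have to be reserved before the target starts. A sketch of the usual reservation, using SPDK's setup helper and the plain kernel interface (the sizes are illustrative, not taken from this job):

    # Reserve hugepages before launching nvmf_tgt; HUGEMEM is in MB.
    sudo HUGEMEM=4096 ./scripts/setup.sh
    # Or reserve 2 MB pages directly on a specific NUMA node:
    echo 1024 | sudo tee /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    # Verify HugePages_Total / HugePages_Free afterwards:
    grep -i hugepages /proc/meminfo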
00:29:20.857 [2024-07-23 14:11:11.608844] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.857 [2024-07-23 14:11:11.609447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.857 [2024-07-23 14:11:11.609793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.857 [2024-07-23 14:11:11.609803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.857 [2024-07-23 14:11:11.609811] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.857 [2024-07-23 14:11:11.609943] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.857 [2024-07-23 14:11:11.610094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.857 [2024-07-23 14:11:11.610105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.857 [2024-07-23 14:11:11.610112] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.857 [2024-07-23 14:11:11.611785] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.857 [2024-07-23 14:11:11.615240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:20.857 [2024-07-23 14:11:11.620781] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.857 [2024-07-23 14:11:11.621322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.857 [2024-07-23 14:11:11.621681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.857 [2024-07-23 14:11:11.621691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.857 [2024-07-23 14:11:11.621699] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.857 [2024-07-23 14:11:11.621843] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.857 [2024-07-23 14:11:11.621972] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.857 [2024-07-23 14:11:11.621981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.857 [2024-07-23 14:11:11.621987] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.857 [2024-07-23 14:11:11.623773] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.857 [2024-07-23 14:11:11.632633] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.857 [2024-07-23 14:11:11.633114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.857 [2024-07-23 14:11:11.633481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.857 [2024-07-23 14:11:11.633491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.857 [2024-07-23 14:11:11.633499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.857 [2024-07-23 14:11:11.633613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.857 [2024-07-23 14:11:11.633758] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.857 [2024-07-23 14:11:11.633766] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.857 [2024-07-23 14:11:11.633772] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.857 [2024-07-23 14:11:11.635425] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.857 [2024-07-23 14:11:11.644544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.857 [2024-07-23 14:11:11.645105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.857 [2024-07-23 14:11:11.645417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.857 [2024-07-23 14:11:11.645427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.857 [2024-07-23 14:11:11.645434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.857 [2024-07-23 14:11:11.645533] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.857 [2024-07-23 14:11:11.645633] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.857 [2024-07-23 14:11:11.645641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.858 [2024-07-23 14:11:11.645651] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.858 [2024-07-23 14:11:11.647451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.858 [2024-07-23 14:11:11.656570] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.858 [2024-07-23 14:11:11.657169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.657485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.657496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.858 [2024-07-23 14:11:11.657504] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.858 [2024-07-23 14:11:11.657622] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.858 [2024-07-23 14:11:11.657742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.858 [2024-07-23 14:11:11.657751] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.858 [2024-07-23 14:11:11.657758] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.858 [2024-07-23 14:11:11.659732] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.858 [2024-07-23 14:11:11.668587] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.858 [2024-07-23 14:11:11.669200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.669559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.669569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.858 [2024-07-23 14:11:11.669577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.858 [2024-07-23 14:11:11.669695] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.858 [2024-07-23 14:11:11.669827] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.858 [2024-07-23 14:11:11.669835] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.858 [2024-07-23 14:11:11.669842] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.858 [2024-07-23 14:11:11.671618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:20.858 [2024-07-23 14:11:11.680645] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.858 [2024-07-23 14:11:11.681196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.681581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.681592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.858 [2024-07-23 14:11:11.681601] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.858 [2024-07-23 14:11:11.681704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.858 [2024-07-23 14:11:11.681839] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.858 [2024-07-23 14:11:11.681847] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.858 [2024-07-23 14:11:11.681854] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.858 [2024-07-23 14:11:11.683609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.858 [2024-07-23 14:11:11.692498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.858 [2024-07-23 14:11:11.693053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.693437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.693447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.858 [2024-07-23 14:11:11.693455] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.858 [2024-07-23 14:11:11.693570] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.858 [2024-07-23 14:11:11.693654] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.858 [2024-07-23 14:11:11.693662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.858 [2024-07-23 14:11:11.693669] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.858 [2024-07-23 14:11:11.693888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:20.858 [2024-07-23 14:11:11.693985] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.858 [2024-07-23 14:11:11.693993] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.858 [2024-07-23 14:11:11.693999] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
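The app_setup_trace notices above are the one actionable hint in this stretch: with tracepoint group mask 0xFFFF the target keeps its event ring in /dev/shm/nvmf_trace.0, readable live by app name and shm id or copied out for offline decoding. A usage sketch built on the two options the log itself suggests (the -f form for reading a copied file is an assumption about the spdk_trace tool, not quoted from this log):

    # Live: attach to the running app's trace buffer (app name nvmf, shm id 0).
    spdk_trace -s nvmf -i 0
    # Offline: snapshot the shared-memory ring, then decode the copy later.
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    spdk_trace -f /tmp/nvmf_trace.0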
00:29:20.858 [2024-07-23 14:11:11.694098] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:29:20.858 [2024-07-23 14:11:11.694123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:29:20.858 [2024-07-23 14:11:11.694124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:20.858 [2024-07-23 14:11:11.695348] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.858 [2024-07-23 14:11:11.704656] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.858 [2024-07-23 14:11:11.705264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.705717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.705729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.858 [2024-07-23 14:11:11.705738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.858 [2024-07-23 14:11:11.705842] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.858 [2024-07-23 14:11:11.705929] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.858 [2024-07-23 14:11:11.705937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.858 [2024-07-23 14:11:11.705944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.858 [2024-07-23 14:11:11.707679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:20.858 [2024-07-23 14:11:11.716648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:20.858 [2024-07-23 14:11:11.717262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.717674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:20.858 [2024-07-23 14:11:11.717685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:20.858 [2024-07-23 14:11:11.717693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:20.858 [2024-07-23 14:11:11.717803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:20.858 [2024-07-23 14:11:11.717936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:20.858 [2024-07-23 14:11:11.717944] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:20.858 [2024-07-23 14:11:11.717951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:20.858 [2024-07-23 14:11:11.719789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
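The three reactor notices above line up with the -m 0xE mask passed to nvmf_tgt earlier: 0xE is binary 1110, so reactors run on cores 1, 2 and 3 while core 0 stays free, which is also why spdk_app_start reported "Total cores available: 3". A one-liner to decode any such mask (purely illustrative):

    # Decode an SPDK core mask into core numbers: 0xE -> 1 2 3
    mask=0xE
    for core in $(seq 0 63); do
        (( (mask >> core) & 1 )) && printf '%d ' "$core"
    done; echo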
00:29:20.858 [2024-07-23 14:11:11.728646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.858 [2024-07-23 14:11:11.729209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-07-23 14:11:11.729669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-07-23 14:11:11.729680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.858 [2024-07-23 14:11:11.729689] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.858 [2024-07-23 14:11:11.729792] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.858 [2024-07-23 14:11:11.729911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.858 [2024-07-23 14:11:11.729920] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.858 [2024-07-23 14:11:11.729927] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.858 [2024-07-23 14:11:11.731857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.858 [2024-07-23 14:11:11.740738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.858 [2024-07-23 14:11:11.741298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-07-23 14:11:11.741752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-07-23 14:11:11.741763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.858 [2024-07-23 14:11:11.741772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.858 [2024-07-23 14:11:11.741876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.858 [2024-07-23 14:11:11.741978] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.858 [2024-07-23 14:11:11.741987] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.858 [2024-07-23 14:11:11.741994] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.858 [2024-07-23 14:11:11.743903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.858 [2024-07-23 14:11:11.752730] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.858 [2024-07-23 14:11:11.753308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.858 [2024-07-23 14:11:11.753763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.753773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.753782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.753901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.754038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.754050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.754057] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.755919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.764873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.765425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.765860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.765870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.765878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.765995] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.766115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.766124] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.766131] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.767770] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.777061] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.777598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.778046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.778056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.778064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.778165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.778283] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.778291] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.778297] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.779891] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.788970] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.789539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.789999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.790009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.790016] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.790123] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.790239] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.790251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.790257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.792121] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.800840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.801353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.801790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.801801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.801808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.801942] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.802061] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.802070] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.802076] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.803968] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.812828] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.813314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.813754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.813765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.813772] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.813874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.814021] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.814029] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.814035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.815919] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.824761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.825262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.825643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.825654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.825661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.825794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.825926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.825934] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.825943] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.827764] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.836789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.837325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.837778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.837789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.837796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.837898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.838048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.838056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.838063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.839848] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.848751] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.849287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.849713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.849723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.849730] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.849847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.859 [2024-07-23 14:11:11.849949] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.859 [2024-07-23 14:11:11.849957] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.859 [2024-07-23 14:11:11.849963] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.859 [2024-07-23 14:11:11.851760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:20.859 [2024-07-23 14:11:11.860807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:20.859 [2024-07-23 14:11:11.861361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.861787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:20.859 [2024-07-23 14:11:11.861798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:20.859 [2024-07-23 14:11:11.861805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:20.859 [2024-07-23 14:11:11.861938] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:20.860 [2024-07-23 14:11:11.862023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:20.860 [2024-07-23 14:11:11.862032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:20.860 [2024-07-23 14:11:11.862038] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:20.860 [2024-07-23 14:11:11.863738] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.121 [2024-07-23 14:11:11.872891] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.121 [2024-07-23 14:11:11.873455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.873889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.873899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.121 [2024-07-23 14:11:11.873906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.121 [2024-07-23 14:11:11.874008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.121 [2024-07-23 14:11:11.874113] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.121 [2024-07-23 14:11:11.874121] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.121 [2024-07-23 14:11:11.874128] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.121 [2024-07-23 14:11:11.875944] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.121 [2024-07-23 14:11:11.885036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.121 [2024-07-23 14:11:11.885615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.886051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.886062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.121 [2024-07-23 14:11:11.886069] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.121 [2024-07-23 14:11:11.886217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.121 [2024-07-23 14:11:11.886365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.121 [2024-07-23 14:11:11.886373] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.121 [2024-07-23 14:11:11.886380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.121 [2024-07-23 14:11:11.888153] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.121 [2024-07-23 14:11:11.897050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.121 [2024-07-23 14:11:11.897585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.898034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.898048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.121 [2024-07-23 14:11:11.898056] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.121 [2024-07-23 14:11:11.898188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.121 [2024-07-23 14:11:11.898277] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.121 [2024-07-23 14:11:11.898285] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.121 [2024-07-23 14:11:11.898292] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.121 [2024-07-23 14:11:11.900068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.121 [2024-07-23 14:11:11.908897] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.121 [2024-07-23 14:11:11.909459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.909887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.909897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.121 [2024-07-23 14:11:11.909904] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.121 [2024-07-23 14:11:11.910006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.121 [2024-07-23 14:11:11.910096] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.121 [2024-07-23 14:11:11.910104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.121 [2024-07-23 14:11:11.910110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.121 [2024-07-23 14:11:11.911945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.121 [2024-07-23 14:11:11.921067] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.121 [2024-07-23 14:11:11.921612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.922060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.922072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.121 [2024-07-23 14:11:11.922079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.121 [2024-07-23 14:11:11.922227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.121 [2024-07-23 14:11:11.922361] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.121 [2024-07-23 14:11:11.922369] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.121 [2024-07-23 14:11:11.922376] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.121 [2024-07-23 14:11:11.924408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.121 [2024-07-23 14:11:11.933387] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.121 [2024-07-23 14:11:11.933976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.934401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.934412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.121 [2024-07-23 14:11:11.934419] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.121 [2024-07-23 14:11:11.934551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.121 [2024-07-23 14:11:11.934638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.121 [2024-07-23 14:11:11.934646] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.121 [2024-07-23 14:11:11.934652] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.121 [2024-07-23 14:11:11.936582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.121 [2024-07-23 14:11:11.945518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.121 [2024-07-23 14:11:11.946061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.946493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.121 [2024-07-23 14:11:11.946504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.121 [2024-07-23 14:11:11.946511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.121 [2024-07-23 14:11:11.946597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.121 [2024-07-23 14:11:11.946699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.121 [2024-07-23 14:11:11.946707] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:11.946713] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:11.948553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:11.957470] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:11.958035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:11.958472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:11.958483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:11.958490] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:11.958592] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.122 [2024-07-23 14:11:11.958693] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.122 [2024-07-23 14:11:11.958700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:11.958707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:11.960466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:11.969400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:11.969970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:11.970388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:11.970400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:11.970407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:11.970508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.122 [2024-07-23 14:11:11.970595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.122 [2024-07-23 14:11:11.970602] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:11.970609] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:11.972473] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:11.981378] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:11.981936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:11.982370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:11.982384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:11.982391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:11.982524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.122 [2024-07-23 14:11:11.982671] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.122 [2024-07-23 14:11:11.982679] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:11.982686] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:11.984477] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:11.993334] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:11.993867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:11.994316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:11.994327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:11.994334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:11.994421] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.122 [2024-07-23 14:11:11.994553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.122 [2024-07-23 14:11:11.994561] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:11.994568] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:11.996389] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:12.005353] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:12.005896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.006349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.006361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:12.006368] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:12.006515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.122 [2024-07-23 14:11:12.006666] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.122 [2024-07-23 14:11:12.006674] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:12.006681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:12.008404] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:12.017449] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:12.018041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.018498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.018509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:12.018525] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:12.018673] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.122 [2024-07-23 14:11:12.018806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.122 [2024-07-23 14:11:12.018814] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:12.018821] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:12.020688] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:12.029497] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:12.030099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.030455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.030466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:12.030473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:12.030606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.122 [2024-07-23 14:11:12.030692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.122 [2024-07-23 14:11:12.030700] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:12.030707] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:12.032618] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:12.041498] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:12.042125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.042434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.042444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:12.042451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:12.042568] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.122 [2024-07-23 14:11:12.042685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.122 [2024-07-23 14:11:12.042694] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.122 [2024-07-23 14:11:12.042700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.122 [2024-07-23 14:11:12.044462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.122 [2024-07-23 14:11:12.053541] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.122 [2024-07-23 14:11:12.054139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.054499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.122 [2024-07-23 14:11:12.054510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.122 [2024-07-23 14:11:12.054517] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.122 [2024-07-23 14:11:12.054668] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.123 [2024-07-23 14:11:12.054772] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.123 [2024-07-23 14:11:12.054781] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.123 [2024-07-23 14:11:12.054787] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.123 [2024-07-23 14:11:12.056609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.123 [2024-07-23 14:11:12.065458] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.123 [2024-07-23 14:11:12.066047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.066480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.066491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.123 [2024-07-23 14:11:12.066498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.123 [2024-07-23 14:11:12.066600] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.123 [2024-07-23 14:11:12.066686] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.123 [2024-07-23 14:11:12.066693] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.123 [2024-07-23 14:11:12.066700] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.123 [2024-07-23 14:11:12.068553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.123 [2024-07-23 14:11:12.077432] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.123 [2024-07-23 14:11:12.078029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.078346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.078356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.123 [2024-07-23 14:11:12.078364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.123 [2024-07-23 14:11:12.078466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.123 [2024-07-23 14:11:12.078598] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.123 [2024-07-23 14:11:12.078607] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.123 [2024-07-23 14:11:12.078613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.123 [2024-07-23 14:11:12.080450] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.123 [2024-07-23 14:11:12.089540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.123 [2024-07-23 14:11:12.090118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.090477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.090487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.123 [2024-07-23 14:11:12.090495] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.123 [2024-07-23 14:11:12.090582] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.123 [2024-07-23 14:11:12.090749] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.123 [2024-07-23 14:11:12.090758] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.123 [2024-07-23 14:11:12.090764] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.123 [2024-07-23 14:11:12.092391] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.123 [2024-07-23 14:11:12.101592] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.123 [2024-07-23 14:11:12.102147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.102505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.102515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.123 [2024-07-23 14:11:12.102522] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.123 [2024-07-23 14:11:12.102640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.123 [2024-07-23 14:11:12.102742] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.123 [2024-07-23 14:11:12.102750] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.123 [2024-07-23 14:11:12.102756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.123 [2024-07-23 14:11:12.104623] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.123 [2024-07-23 14:11:12.113684] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.123 [2024-07-23 14:11:12.114253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.114683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.114694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.123 [2024-07-23 14:11:12.114701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.123 [2024-07-23 14:11:12.114803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.123 [2024-07-23 14:11:12.114919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.123 [2024-07-23 14:11:12.114927] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.123 [2024-07-23 14:11:12.114934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.123 [2024-07-23 14:11:12.116668] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.123 [2024-07-23 14:11:12.125864] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.123 [2024-07-23 14:11:12.126440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.126869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.123 [2024-07-23 14:11:12.126880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.123 [2024-07-23 14:11:12.126887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.123 [2024-07-23 14:11:12.127019] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.123 [2024-07-23 14:11:12.127139] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.123 [2024-07-23 14:11:12.127150] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.123 [2024-07-23 14:11:12.127157] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.123 [2024-07-23 14:11:12.128978] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.385 [2024-07-23 14:11:12.137760] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.385 [2024-07-23 14:11:12.138260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.138670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.138681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.385 [2024-07-23 14:11:12.138688] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.385 [2024-07-23 14:11:12.138775] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.385 [2024-07-23 14:11:12.138907] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.385 [2024-07-23 14:11:12.138915] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.385 [2024-07-23 14:11:12.138921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.385 [2024-07-23 14:11:12.140967] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.385 [2024-07-23 14:11:12.149748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.385 [2024-07-23 14:11:12.150247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.150677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.150688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.385 [2024-07-23 14:11:12.150695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.385 [2024-07-23 14:11:12.150781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.385 [2024-07-23 14:11:12.150898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.385 [2024-07-23 14:11:12.150907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.385 [2024-07-23 14:11:12.150913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.385 [2024-07-23 14:11:12.152893] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.385 [2024-07-23 14:11:12.161744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.385 [2024-07-23 14:11:12.162280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.162646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.162657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.385 [2024-07-23 14:11:12.162664] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.385 [2024-07-23 14:11:12.162781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.385 [2024-07-23 14:11:12.162898] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.385 [2024-07-23 14:11:12.162907] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.385 [2024-07-23 14:11:12.162917] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.385 [2024-07-23 14:11:12.164635] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.385 [2024-07-23 14:11:12.173857] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.385 [2024-07-23 14:11:12.174191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.174597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.174607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.385 [2024-07-23 14:11:12.174613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.385 [2024-07-23 14:11:12.174761] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.385 [2024-07-23 14:11:12.174862] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.385 [2024-07-23 14:11:12.174870] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.385 [2024-07-23 14:11:12.174878] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.385 [2024-07-23 14:11:12.176717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.385 [2024-07-23 14:11:12.185772] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.385 [2024-07-23 14:11:12.186145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.186538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.186550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.385 [2024-07-23 14:11:12.186557] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.385 [2024-07-23 14:11:12.186676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.385 [2024-07-23 14:11:12.186777] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.385 [2024-07-23 14:11:12.186786] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.385 [2024-07-23 14:11:12.186793] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.385 [2024-07-23 14:11:12.188692] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.385 [2024-07-23 14:11:12.197642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.385 [2024-07-23 14:11:12.198215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.198578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:21.385 [2024-07-23 14:11:12.198588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420
00:29:21.385 [2024-07-23 14:11:12.198596] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set
00:29:21.385 [2024-07-23 14:11:12.198683] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor
00:29:21.385 [2024-07-23 14:11:12.198814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:21.385 [2024-07-23 14:11:12.198823] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:21.385 [2024-07-23 14:11:12.198830] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:21.385 [2024-07-23 14:11:12.200582] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.386 [2024-07-23 14:11:12.209648] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.386 [2024-07-23 14:11:12.210193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.210558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.210569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.386 [2024-07-23 14:11:12.210577] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.386 [2024-07-23 14:11:12.210680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.386 [2024-07-23 14:11:12.210797] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.386 [2024-07-23 14:11:12.210806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.386 [2024-07-23 14:11:12.210812] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.386 [2024-07-23 14:11:12.212683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.386 [2024-07-23 14:11:12.221571] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.386 [2024-07-23 14:11:12.222153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.222458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.222469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.386 [2024-07-23 14:11:12.222477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.386 [2024-07-23 14:11:12.222609] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.386 [2024-07-23 14:11:12.222757] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.386 [2024-07-23 14:11:12.222766] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.386 [2024-07-23 14:11:12.222773] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.386 [2024-07-23 14:11:12.224581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.386 [2024-07-23 14:11:12.233591] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.386 [2024-07-23 14:11:12.234193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.234621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.234632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.386 [2024-07-23 14:11:12.234639] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.386 [2024-07-23 14:11:12.234757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.386 [2024-07-23 14:11:12.234859] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.386 [2024-07-23 14:11:12.234867] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.386 [2024-07-23 14:11:12.234873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.386 [2024-07-23 14:11:12.236695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.386 [2024-07-23 14:11:12.245881] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.386 [2024-07-23 14:11:12.246425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.246856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.246867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.386 [2024-07-23 14:11:12.246874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.386 [2024-07-23 14:11:12.246990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.386 [2024-07-23 14:11:12.247126] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.386 [2024-07-23 14:11:12.247135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.386 [2024-07-23 14:11:12.247141] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.386 [2024-07-23 14:11:12.248945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.386 [2024-07-23 14:11:12.257922] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.386 [2024-07-23 14:11:12.258504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.258911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.258921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.386 [2024-07-23 14:11:12.258928] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.386 [2024-07-23 14:11:12.259066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.386 [2024-07-23 14:11:12.259182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.386 [2024-07-23 14:11:12.259191] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.386 [2024-07-23 14:11:12.259197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.386 [2024-07-23 14:11:12.260971] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.386 [2024-07-23 14:11:12.269928] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.386 [2024-07-23 14:11:12.270502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.270931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.270942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.386 [2024-07-23 14:11:12.270951] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.386 [2024-07-23 14:11:12.271086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.386 [2024-07-23 14:11:12.271173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.386 [2024-07-23 14:11:12.271182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.386 [2024-07-23 14:11:12.271189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.386 [2024-07-23 14:11:12.273023] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.386 [2024-07-23 14:11:12.281901] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.386 [2024-07-23 14:11:12.282471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.282829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.282840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.386 [2024-07-23 14:11:12.282847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.386 [2024-07-23 14:11:12.282965] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.386 [2024-07-23 14:11:12.283100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.386 [2024-07-23 14:11:12.283110] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.386 [2024-07-23 14:11:12.283117] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.386 [2024-07-23 14:11:12.284903] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.386 [2024-07-23 14:11:12.293888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.386 [2024-07-23 14:11:12.294449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.294810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.386 [2024-07-23 14:11:12.294821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.386 [2024-07-23 14:11:12.294829] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.386 [2024-07-23 14:11:12.294977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.386 [2024-07-23 14:11:12.295098] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.387 [2024-07-23 14:11:12.295107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.387 [2024-07-23 14:11:12.295114] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.387 [2024-07-23 14:11:12.296976] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.387 [2024-07-23 14:11:12.306006] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.387 [2024-07-23 14:11:12.306566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.387 [2024-07-23 14:11:12.306997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.387 [2024-07-23 14:11:12.307009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.387 [2024-07-23 14:11:12.307017] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.387 [2024-07-23 14:11:12.307138] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.387 [2024-07-23 14:11:12.307241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.387 [2024-07-23 14:11:12.307250] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.387 [2024-07-23 14:11:12.307256] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.387 [2024-07-23 14:11:12.309027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.387 [2024-07-23 14:11:12.317876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.387 [2024-07-23 14:11:12.318363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.387 [2024-07-23 14:11:12.318743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.387 [2024-07-23 14:11:12.318754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.387 [2024-07-23 14:11:12.318764] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.387 [2024-07-23 14:11:12.318897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.387 [2024-07-23 14:11:12.319048] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.387 [2024-07-23 14:11:12.319057] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.387 [2024-07-23 14:11:12.319063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.387 [2024-07-23 14:11:12.320896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:21.387 [2024-07-23 14:11:12.329918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.387 [2024-07-23 14:11:12.330530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.387 [2024-07-23 14:11:12.330915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.387 [2024-07-23 14:11:12.330925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.387 [2024-07-23 14:11:12.330932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.387 [2024-07-23 14:11:12.331068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.387 [2024-07-23 14:11:12.331170] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.387 [2024-07-23 14:11:12.331178] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.387 [2024-07-23 14:11:12.331184] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.387 [2024-07-23 14:11:12.333068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:21.387 [2024-07-23 14:11:12.342187] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:21.387 [2024-07-23 14:11:12.342794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.387 [2024-07-23 14:11:12.343024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:21.387 [2024-07-23 14:11:12.343058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xadd900 with addr=10.0.0.2, port=4420 00:29:21.387 [2024-07-23 14:11:12.343071] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadd900 is same with the state(5) to be set 00:29:21.387 [2024-07-23 14:11:12.343244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadd900 (9): Bad file descriptor 00:29:21.387 [2024-07-23 14:11:12.343365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:21.387 [2024-07-23 14:11:12.343379] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:21.387 [2024-07-23 14:11:12.343389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:21.387 [2024-07-23 14:11:12.345576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
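errno = 111 in the posix_sock_create errors above is ECONNREFUSED: nothing is accepting on 10.0.0.2:4420 yet, so every reconnect attempt bdev_nvme makes dies at connect() and the controller reset fails. A minimal bash sketch of the same symptom, assuming the log's address and port and no running target (this is not SPDK code, just the socket-level behavior being retried):

    # With no listener on 10.0.0.2:4420, every connect() returns
    # ECONNREFUSED (errno 111), matching the cycle above.
    for attempt in 1 2 3; do
        if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
            echo "attempt $attempt: connected"
            break
        fi
        echo "attempt $attempt: connection refused, retrying"
        sleep 0.012   # the log's reconnect attempts land roughly 12 ms apart
    done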
00:29:21.387 14:11:12 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:21.387 14:11:12 -- common/autotest_common.sh@852 -- # return 0
00:29:21.387 14:11:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:29:21.387 14:11:12 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:21.387 [2024-07-23 14:11:12.354437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.387 14:11:12 -- common/autotest_common.sh@10 -- # set +x
00:29:21.387 [2024-07-23 14:11:12.357617] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.387 [2024-07-23 14:11:12.366536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.387 [2024-07-23 14:11:12.369593] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.387 [2024-07-23 14:11:12.378493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.387 [2024-07-23 14:11:12.381576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.387 14:11:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:21.387 14:11:12 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:21.387 14:11:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.387 14:11:12 -- common/autotest_common.sh@10 -- # set +x
00:29:21.387 [2024-07-23 14:11:12.390481] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.388 [2024-07-23 14:11:12.393279] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.388 [2024-07-23 14:11:12.395126] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:21.388 14:11:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.648 14:11:12 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:21.648 14:11:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.648 14:11:12 -- common/autotest_common.sh@10 -- # set +x
00:29:21.648 [2024-07-23 14:11:12.402452] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.648 [2024-07-23 14:11:12.405490] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.648 [2024-07-23 14:11:12.414478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.648 [2024-07-23 14:11:12.417417] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.648 [2024-07-23 14:11:12.426566] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.648 [2024-07-23 14:11:12.429344] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.648 Malloc0
00:29:21.648 14:11:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.648 14:11:12 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:21.648 14:11:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.648 14:11:12 -- common/autotest_common.sh@10 -- # set +x
00:29:21.648 [2024-07-23 14:11:12.438518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.649 [2024-07-23 14:11:12.441553] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.649 14:11:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.649 14:11:12 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:21.649 14:11:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.649 14:11:12 -- common/autotest_common.sh@10 -- # set +x
00:29:21.649 [2024-07-23 14:11:12.450644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.649 [2024-07-23 14:11:12.453720] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:21.649 14:11:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.649 14:11:12 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:21.649 14:11:12 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:21.649 14:11:12 -- common/autotest_common.sh@10 -- # set +x
00:29:21.649 [2024-07-23 14:11:12.458645] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:21.649 [2024-07-23 14:11:12.462683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:21.649 14:11:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:21.649 14:11:12 -- host/bdevperf.sh@38 -- # wait 3429954
00:29:21.649 [2024-07-23 14:11:12.648468] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
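The rpc_cmd wrappers traced above are thin shims around SPDK's rpc.py, and the sequence explains the recovery: the moment nvmf_subsystem_add_listener brings up 10.0.0.2:4420, the long-failing reset finally completes ("Resetting controller successful."). A sketch of the same bring-up issued by hand, reusing the exact arguments from the log (the rpc.py path is assumed; the comments are interpretation, not trace output):

    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from this log
    RPC="$SPDK_ROOT/scripts/rpc.py"

    $RPC nvmf_create_transport -t tcp -o -u 8192                  # "*** TCP Transport Init ***"
    $RPC bdev_malloc_create 64 512 -b Malloc0                     # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420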
00:29:31.636
00:29:31.636                                                                                           Latency(us)
00:29:31.636 Device Information                                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:31.636 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:31.636 Verification LBA range: start 0x0 length 0x4000
00:29:31.636 Nvme1n1                                                :      15.01   12133.61      47.40   18731.69     0.00    4135.08    1011.53   20059.71
00:29:31.636 ===================================================================================================================
00:29:31.636 Total                                                  :              12133.61      47.40   18731.69     0.00    4135.08    1011.53   20059.71
00:29:31.636 14:11:21 -- host/bdevperf.sh@39 -- # sync
00:29:31.636 14:11:21 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:31.636 14:11:21 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:31.636 14:11:21 -- common/autotest_common.sh@10 -- # set +x
00:29:31.636 14:11:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:31.636 14:11:21 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:31.636 14:11:21 -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:31.636 14:11:21 -- nvmf/common.sh@476 -- # nvmfcleanup
00:29:31.636 14:11:21 -- nvmf/common.sh@116 -- # sync
00:29:31.636 14:11:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:29:31.636 14:11:21 -- nvmf/common.sh@119 -- # set +e
00:29:31.636 14:11:21 -- nvmf/common.sh@120 -- # for i in {1..20}
00:29:31.636 14:11:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:29:31.636 rmmod nvme_tcp
00:29:31.636 rmmod nvme_fabrics
00:29:31.636 rmmod nvme_keyring
00:29:31.636 14:11:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:29:31.636 14:11:21 -- nvmf/common.sh@123 -- # set -e
00:29:31.636 14:11:21 -- nvmf/common.sh@124 -- # return 0
00:29:31.636 14:11:21 -- nvmf/common.sh@477 -- # '[' -n 3431034 ']'
00:29:31.636 14:11:21 -- nvmf/common.sh@478 -- # killprocess 3431034
00:29:31.636 14:11:21 -- common/autotest_common.sh@926 -- # '[' -z 3431034 ']'
00:29:31.636 14:11:21 -- common/autotest_common.sh@930 -- # kill -0 3431034
00:29:31.636 14:11:21 -- common/autotest_common.sh@931 -- # uname
00:29:31.636 14:11:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:31.636 14:11:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3431034
00:29:31.636 14:11:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:31.636 14:11:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:31.636 14:11:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3431034'
00:29:31.636 killing process with pid 3431034
00:29:31.636 14:11:21 -- common/autotest_common.sh@945 -- # kill 3431034
00:29:31.636 14:11:21 -- common/autotest_common.sh@950 -- # wait 3431034
00:29:31.636 14:11:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:29:31.636 14:11:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:29:31.636 14:11:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:29:31.636 14:11:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:29:31.636 14:11:21 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:29:31.636 14:11:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:31.636 14:11:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:29:31.636 14:11:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:33.018 14:11:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:29:33.018
00:29:33.018 real 0m26.145s
00:29:33.018 user 1m3.143s
00:29:33.018 sys 0m6.198s
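A quick cross-check on the Latency table above: the MiB/s column is just IOPS times the 4096-byte IO size from the job line, which a throwaway one-liner confirms:

    awk 'BEGIN { printf "%.2f MiB/s\n", 12133.61 * 4096 / (1024 * 1024) }'   # prints 47.40, matching Nvme1n1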
00:29:33.018 14:11:23 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:33.018 14:11:23 -- common/autotest_common.sh@10 -- # set +x
00:29:33.018 ************************************
00:29:33.018 END TEST nvmf_bdevperf
00:29:33.018 ************************************
00:29:33.018 14:11:23 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:33.018 14:11:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:29:33.018 14:11:23 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:29:33.018 14:11:23 -- common/autotest_common.sh@10 -- # set +x
00:29:33.018 ************************************
00:29:33.018 START TEST nvmf_target_disconnect
00:29:33.018 ************************************
00:29:33.018 14:11:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:29:33.018 * Looking for test storage...
00:29:33.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:29:33.018 14:11:23 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:33.018 14:11:23 -- nvmf/common.sh@7 -- # uname -s
00:29:33.018 14:11:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:33.018 14:11:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:33.018 14:11:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:33.018 14:11:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:33.018 14:11:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:33.018 14:11:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:33.018 14:11:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:33.018 14:11:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:33.018 14:11:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:33.018 14:11:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:33.018 14:11:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:29:33.018 14:11:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:29:33.018 14:11:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:33.018 14:11:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:33.018 14:11:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:33.018 14:11:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:33.018 14:11:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:33.018 14:11:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:33.018 14:11:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:33.018 14:11:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:33.018 14:11:23 -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.018 14:11:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.018 14:11:23 -- paths/export.sh@5 -- # export PATH 00:29:33.018 14:11:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:33.018 14:11:23 -- nvmf/common.sh@46 -- # : 0 00:29:33.018 14:11:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:33.018 14:11:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:33.018 14:11:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:33.018 14:11:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:33.018 14:11:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:33.018 14:11:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:33.018 14:11:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:33.018 14:11:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:33.018 14:11:23 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:33.018 14:11:23 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:33.018 14:11:23 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:33.018 14:11:23 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:33.018 14:11:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:33.018 14:11:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.018 14:11:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:33.018 14:11:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:33.018 14:11:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:33.018 14:11:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.018 14:11:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:33.018 14:11:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.018 14:11:23 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:33.018 14:11:23 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:33.018 14:11:23 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:29:33.018 14:11:23 -- common/autotest_common.sh@10 -- # set +x 00:29:38.292 14:11:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:38.292 14:11:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:38.292 14:11:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:38.292 14:11:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:38.292 14:11:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:38.292 14:11:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:38.292 14:11:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:38.292 14:11:28 -- nvmf/common.sh@294 -- # net_devs=() 00:29:38.292 14:11:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:38.292 14:11:28 -- nvmf/common.sh@295 -- # e810=() 00:29:38.292 14:11:28 -- nvmf/common.sh@295 -- # local -ga e810 00:29:38.292 14:11:28 -- nvmf/common.sh@296 -- # x722=() 00:29:38.292 14:11:28 -- nvmf/common.sh@296 -- # local -ga x722 00:29:38.292 14:11:28 -- nvmf/common.sh@297 -- # mlx=() 00:29:38.292 14:11:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:38.292 14:11:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.292 14:11:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:38.292 14:11:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:38.292 14:11:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:38.292 14:11:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:38.292 14:11:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:38.292 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:38.292 14:11:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:38.292 14:11:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:38.292 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:38.292 14:11:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:38.292 14:11:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:38.292 14:11:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.292 14:11:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:38.292 14:11:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.292 14:11:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:38.292 Found net devices under 0000:86:00.0: cvl_0_0 00:29:38.292 14:11:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.292 14:11:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:38.292 14:11:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.292 14:11:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:38.292 14:11:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.292 14:11:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:38.292 Found net devices under 0000:86:00.1: cvl_0_1 00:29:38.292 14:11:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.292 14:11:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:38.292 14:11:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:38.292 14:11:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:38.292 14:11:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:38.292 14:11:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.292 14:11:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.292 14:11:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.292 14:11:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:38.292 14:11:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.292 14:11:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.292 14:11:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:38.292 14:11:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.292 14:11:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.292 14:11:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:38.292 14:11:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:38.292 14:11:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.292 14:11:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.292 14:11:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.292 14:11:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.292 14:11:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:38.292 14:11:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.292 14:11:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.292 14:11:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.292 14:11:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:38.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:38.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:29:38.292 00:29:38.292 --- 10.0.0.2 ping statistics --- 00:29:38.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.292 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:29:38.292 14:11:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:38.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:29:38.292 00:29:38.292 --- 10.0.0.1 ping statistics --- 00:29:38.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.292 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:29:38.292 14:11:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.293 14:11:28 -- nvmf/common.sh@410 -- # return 0 00:29:38.293 14:11:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:38.293 14:11:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.293 14:11:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:38.293 14:11:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:38.293 14:11:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.293 14:11:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:38.293 14:11:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:38.293 14:11:28 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:38.293 14:11:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:38.293 14:11:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:38.293 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:29:38.293 ************************************ 00:29:38.293 START TEST nvmf_target_disconnect_tc1 00:29:38.293 ************************************ 00:29:38.293 14:11:28 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:29:38.293 14:11:28 -- host/target_disconnect.sh@32 -- # set +e 00:29:38.293 14:11:28 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.293 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.293 [2024-07-23 14:11:28.563444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.293 [2024-07-23 14:11:28.563852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:38.293 [2024-07-23 14:11:28.563865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x859610 with addr=10.0.0.2, port=4420 00:29:38.293 [2024-07-23 14:11:28.563886] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:38.293 [2024-07-23 14:11:28.563898] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:38.293 [2024-07-23 14:11:28.563905] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:38.293 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:38.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:38.293 Initializing NVMe Controllers 00:29:38.293 14:11:28 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:38.293 14:11:28 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:38.293 14:11:28 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:29:38.293 14:11:28 -- common/autotest_common.sh@1132 -- # return 0 00:29:38.293 
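tc1's pass condition is inverted: no target process is running yet, so spdk_nvme_probe() in the reconnect example must fail, and the '[' 1 '!=' 1 ']' check that follows passes exactly because the example exited with the expected status. The shape of that expect-failure pattern, paraphrased from the trace (variable names are assumed, not taken from the script):

    set +e   # tolerate the failure we are about to provoke
    "$SPDK_ROOT/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    rc=$?
    set -e
    if [ "$rc" != 1 ]; then   # the trace's '[' 1 '!=' 1 ']': rc was 1, as required
        echo "reconnect should have failed with no target listening" >&2
        exit 1
    fi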
14:11:28 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:38.293 14:11:28 -- host/target_disconnect.sh@41 -- # set -e 00:29:38.293 00:29:38.293 real 0m0.087s 00:29:38.293 user 0m0.034s 00:29:38.293 sys 0m0.052s 00:29:38.293 14:11:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:38.293 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:29:38.293 ************************************ 00:29:38.293 END TEST nvmf_target_disconnect_tc1 00:29:38.293 ************************************ 00:29:38.293 14:11:28 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:38.293 14:11:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:38.293 14:11:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:38.293 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:29:38.293 ************************************ 00:29:38.293 START TEST nvmf_target_disconnect_tc2 00:29:38.293 ************************************ 00:29:38.293 14:11:28 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:29:38.293 14:11:28 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:29:38.293 14:11:28 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:38.293 14:11:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:38.293 14:11:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:38.293 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:29:38.293 14:11:28 -- nvmf/common.sh@469 -- # nvmfpid=3435923 00:29:38.293 14:11:28 -- nvmf/common.sh@470 -- # waitforlisten 3435923 00:29:38.293 14:11:28 -- common/autotest_common.sh@819 -- # '[' -z 3435923 ']' 00:29:38.293 14:11:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.293 14:11:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:38.293 14:11:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.293 14:11:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:38.293 14:11:28 -- common/autotest_common.sh@10 -- # set +x 00:29:38.293 14:11:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:38.293 [2024-07-23 14:11:28.657376] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:38.293 [2024-07-23 14:11:28.657417] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.293 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.293 [2024-07-23 14:11:28.726960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.293 [2024-07-23 14:11:28.803226] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:38.293 [2024-07-23 14:11:28.803339] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:38.293 [2024-07-23 14:11:28.803347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.293 [2024-07-23 14:11:28.803354] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
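nvmfappstart -m 0xF0 hands the target a core mask covering cores 4-7, which is why the app reports "Total cores available: 4" and the reactors directly below start on cores 4, 5, 6 and 7. Decoding the mask by hand (a throwaway sketch, not part of the suite):

    mask=0xF0
    for core in {0..7}; do
        if (( (mask >> core) & 1 )); then
            echo "reactor expected on core $core"   # prints cores 4 through 7
        fi
    done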
00:29:38.293 [2024-07-23 14:11:28.803470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:38.293 [2024-07-23 14:11:28.803576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:38.293 [2024-07-23 14:11:28.803684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:38.293 [2024-07-23 14:11:28.803683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:38.553 14:11:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:38.553 14:11:29 -- common/autotest_common.sh@852 -- # return 0 00:29:38.553 14:11:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:38.553 14:11:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:38.553 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:29:38.553 14:11:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.553 14:11:29 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:38.553 14:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.553 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:29:38.553 Malloc0 00:29:38.553 14:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.553 14:11:29 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:38.553 14:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.553 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:29:38.553 [2024-07-23 14:11:29.505108] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.553 14:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.553 14:11:29 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:38.553 14:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.553 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:29:38.553 14:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.553 14:11:29 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.553 14:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.553 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:29:38.553 14:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.553 14:11:29 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.553 14:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.553 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:29:38.553 [2024-07-23 14:11:29.533314] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.553 14:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.553 14:11:29 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:38.553 14:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:38.553 14:11:29 -- common/autotest_common.sh@10 -- # set +x 00:29:38.553 14:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:38.553 14:11:29 -- host/target_disconnect.sh@50 -- # reconnectpid=3436114 00:29:38.553 14:11:29 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:38.553 14:11:29 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.812 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.725 14:11:31 -- host/target_disconnect.sh@53 -- # kill -9 3435923 00:29:40.725 14:11:31 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 [2024-07-23 14:11:31.558695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:40.725 Read completed with 
error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Write completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.725 Read completed with error (sct=0, sc=8) 00:29:40.725 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 [2024-07-23 14:11:31.558900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 
00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 [2024-07-23 14:11:31.559095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 
starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Write completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 Read completed with error (sct=0, sc=8) 00:29:40.726 starting I/O failed 00:29:40.726 [2024-07-23 14:11:31.559384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:40.726 [2024-07-23 14:11:31.559677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.560086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.560120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.560485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.560891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.560921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.561285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.561640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.561669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.562067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.562451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.562480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 
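[Annotation] The storm of error completions above is the expected outcome of the sequence logged a little earlier: the subsystem is brought up over RPC, reconnect is started against it with 32 queued I/Os per qpair, and the target process is then killed with kill -9 while I/O is in flight, which aborts each qpair's outstanding commands (sct=0, sc=8), reports CQ transport error -6 per qpair, and leaves the host retrying connect() against a dead listener (errno 111). The RPC bring-up is scriptable directly with rpc.py; every command and argument below is copied from the rpc_cmd calls logged above, and only the $SPDK_ROOT shorthand is an assumption of this sketch:

    rpc="$SPDK_ROOT"/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420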
00:29:40.726 [2024-07-23 14:11:31.562869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.563306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.563320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.563615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.564056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.564087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.564449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.564788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.564801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.565223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.565629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.565659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.566130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.566583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.566612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.567111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.567446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.567475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.726 qpair failed and we were unable to recover it. 00:29:40.726 [2024-07-23 14:11:31.567805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.726 [2024-07-23 14:11:31.568216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.568230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 
00:29:40.727 [2024-07-23 14:11:31.568594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.568997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.569026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.569503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.569912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.569927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.570235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.570597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.570626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.571041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.571461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.571491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.571856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.572207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.572236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.572580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.573054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.573091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.573522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.574017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.574058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 
00:29:40.727 [2024-07-23 14:11:31.574472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.574841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.574856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.575288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.575649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.575663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.576084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.576507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.576520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.576826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.577259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.577273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.577631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.578118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.578147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.578615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.578995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.579024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.580239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.580567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.580583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 
00:29:40.727 [2024-07-23 14:11:31.580935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.581307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.581321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.581637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.582029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.582068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.582631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.583108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.583138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.583532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.583977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.584007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.584384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.585915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.585941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.586365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.586808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.586838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.587247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.587628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.587658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 
00:29:40.727 [2024-07-23 14:11:31.592056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.592396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.592413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.592722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.593170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.593192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.593559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.593961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.593980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.594409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.594796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.594816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.595250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.595564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.595579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.595982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.596351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.596368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 00:29:40.727 [2024-07-23 14:11:31.596760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.597117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.727 [2024-07-23 14:11:31.597132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.727 qpair failed and we were unable to recover it. 
00:29:40.727 [2024-07-23 14:11:31.597456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.597814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.597827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.598259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.598558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.598571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.599033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.599385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.599399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.599755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.600192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.600206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.600590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.600876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.600889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.601303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.601597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.601611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.602066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.602423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.602436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 
00:29:40.728 [2024-07-23 14:11:31.602939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.603372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.603386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.603774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.604110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.604124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.604661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.605122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.605136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.605554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.605999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.606012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.606501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.606818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.606832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.607263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.607625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.607639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.608025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.608426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.608440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 
00:29:40.728 [2024-07-23 14:11:31.608801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.609171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.609185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.609537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.609894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.609907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.610388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.610816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.610830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.611237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.611636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.611649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.612000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.612453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.612467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.612773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.613178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.613192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.613601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.613969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.613982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 
00:29:40.728 [2024-07-23 14:11:31.614373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.614725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.614738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.615087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.615524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.615538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.616009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.616433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.616457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.616818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.617333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.617349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.617662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.618088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.618102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.618470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.618776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.618790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.728 [2024-07-23 14:11:31.619153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.619512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.619526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 
00:29:40.728 [2024-07-23 14:11:31.619840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.620214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.728 [2024-07-23 14:11:31.620231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.728 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.620608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.621123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.621137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.621584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.622052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.622065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.622483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.622914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.622927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.623226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.623585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.623598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.624046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.624411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.624425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.624802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.625244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.625258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 
00:29:40.729 [2024-07-23 14:11:31.625615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.626050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.626064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.626370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.626675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.626689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.627126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.627433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.627446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.627896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.628325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.628340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.628658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.629028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.629041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.629354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.629696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.629710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.630127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.630487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.630500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 
00:29:40.729 [2024-07-23 14:11:31.630969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.631323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.631337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.631750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.632109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.632122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.632578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.633035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.633052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.633352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.633736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.633749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.634074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.634428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.634441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.634801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.635148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.635162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.635529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.635964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.635977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 
00:29:40.729 [2024-07-23 14:11:31.636410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.636720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.636733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.637087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.637494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.637507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.637905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.638274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.638287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.638654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.639081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.639096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.639552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.639935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.639948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.640317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.640668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.640682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.641029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.641446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.641459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 
00:29:40.729 [2024-07-23 14:11:31.641884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.642308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.642322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.729 qpair failed and we were unable to recover it. 00:29:40.729 [2024-07-23 14:11:31.642622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.642981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.729 [2024-07-23 14:11:31.642995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.730 qpair failed and we were unable to recover it. 00:29:40.730 [2024-07-23 14:11:31.643364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.730 [2024-07-23 14:11:31.643730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.730 [2024-07-23 14:11:31.643743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.730 qpair failed and we were unable to recover it. 00:29:40.730 [2024-07-23 14:11:31.644181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.644486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.644500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.644941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.645373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.645388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.645742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.646160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.646176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.646537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.646847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.646861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 
00:29:40.731 [2024-07-23 14:11:31.647316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.647667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.647680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.648089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.648430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.648445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.648755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.649211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.649226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.649599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.650052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.650065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.650391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.650697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.650711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.651069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.651636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.651650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.652128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.652560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.652574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 
00:29:40.731 [2024-07-23 14:11:31.652889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.653303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.653317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.653619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.654083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.654097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.654457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.654811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.654824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.655207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.655531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.655545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.656039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.656427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.656440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.656953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.657427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.657441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.657720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.658124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.658139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 
00:29:40.731 [2024-07-23 14:11:31.658499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.658791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.658805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.659228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.659579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.659593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.660105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.660415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.660431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.660789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.661171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.661185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.661618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.661986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.661999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.662367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.662773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.662787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.663179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.663538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.663551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 
00:29:40.731 [2024-07-23 14:11:31.663802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.664157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.731 [2024-07-23 14:11:31.664172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.731 qpair failed and we were unable to recover it. 00:29:40.731 [2024-07-23 14:11:31.664477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.664931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.664945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.665307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.665663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.665676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.666169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.666479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.666493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.666928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.667384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.667399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.667712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.668060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.668076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.668429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.668732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.668745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 
00:29:40.732 [2024-07-23 14:11:31.669122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.669484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.669497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.669919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.670362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.670376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.670740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.671240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.671254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.671641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.672090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.672104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.672516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.672822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.672835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.673207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.673519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.673532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.673922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.674272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.674287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 
00:29:40.732 [2024-07-23 14:11:31.674645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.675111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.675126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.675547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.675902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.675916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.676355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.676734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.676747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.677200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.677508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.677521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.677825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.678232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.678245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.678553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.678916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.678929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.679376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.679678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.679691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 
00:29:40.732 [2024-07-23 14:11:31.680063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.680411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.680425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.680838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.681180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.681195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.681510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.681867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.681882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.682232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.682617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.682631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.682999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.683286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.683301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.683658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.684088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.684102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.684584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.684925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.684938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 
00:29:40.732 [2024-07-23 14:11:31.685228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.685599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.685613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.732 qpair failed and we were unable to recover it. 00:29:40.732 [2024-07-23 14:11:31.685966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.686341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.732 [2024-07-23 14:11:31.686354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.686650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.686991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.687004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.687439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.687748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.687762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.688139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.688495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.688509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.688798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.689171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.689184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.689565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.689925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.689939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 
00:29:40.733 [2024-07-23 14:11:31.690299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.690604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.690617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.690962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.691259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.691274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.691613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.691911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.691924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.692265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.692624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.692637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.692955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.693252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.693266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.693641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.693927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.693940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.694249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.694622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.694636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 
00:29:40.733 [2024-07-23 14:11:31.694939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.695289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.695304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.695611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.695897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.695911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.696261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.696597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.696610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.696961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.697303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.697325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.697627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.697912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.697925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.698269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.698745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.698758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.699149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.699369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.699381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 
00:29:40.733 [2024-07-23 14:11:31.699811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.700090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.700104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.700386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.700739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.700753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.701034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.701356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.701370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.701738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.702040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.702058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.702481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.702797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.702811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.703229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.703523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.703536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.703946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.704241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.704255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 
00:29:40.733 [2024-07-23 14:11:31.704665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.705020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.705036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.705425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.705704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.705716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.733 qpair failed and we were unable to recover it. 00:29:40.733 [2024-07-23 14:11:31.706014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.733 [2024-07-23 14:11:31.706357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.706371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.706707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.707054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.707068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.707416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.707791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.707805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.708224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.708592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.708605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.708988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.709286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.709300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 
00:29:40.734 [2024-07-23 14:11:31.709604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.710046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.710061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.710414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.710733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.710747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.711176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.711537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.711552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.712012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.712359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.712373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.712741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.713041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.713063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.713415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.713888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.713901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.714377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.714676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.714689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 
00:29:40.734 [2024-07-23 14:11:31.715118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.715487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.715501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.715799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.716227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.716240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.716603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.716964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.716978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.717367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.717715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.717728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.718007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.718410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.718425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.718727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.719175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.719189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.719558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.719878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.719891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 
00:29:40.734 [2024-07-23 14:11:31.720248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.720615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.720629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.721083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.721431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.721444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.721812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.722183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.722197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.722549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.722967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.722980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.723419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.723794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.723807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.724227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.724589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.724602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.724965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.725277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.725291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 
00:29:40.734 [2024-07-23 14:11:31.725666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.726096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.726112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.726523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.726866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.726880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.727233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.727613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.727627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:40.734 qpair failed and we were unable to recover it. 00:29:40.734 [2024-07-23 14:11:31.728156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.734 [2024-07-23 14:11:31.728547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.735 [2024-07-23 14:11:31.728559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:40.735 qpair failed and we were unable to recover it. 00:29:40.735 [2024-07-23 14:11:31.728856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.735 [2024-07-23 14:11:31.729148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.735 [2024-07-23 14:11:31.729158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:40.735 qpair failed and we were unable to recover it. 00:29:40.735 [2024-07-23 14:11:31.729507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.735 [2024-07-23 14:11:31.729869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.735 [2024-07-23 14:11:31.729879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:40.735 qpair failed and we were unable to recover it. 00:29:40.735 [2024-07-23 14:11:31.730214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.735 [2024-07-23 14:11:31.730519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:40.735 [2024-07-23 14:11:31.730528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:40.735 qpair failed and we were unable to recover it. 
[... the same three-line connect()/qpair-failure pattern repeats with only the timestamps varying, first for tqpair=0x7f69c0000b90 and then for tqpair=0x1246710, through 14:11:31.843 ...]
00:29:41.007 [2024-07-23 14:11:31.843558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.007 [2024-07-23 14:11:31.843960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.007 [2024-07-23 14:11:31.843974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420
00:29:41.007 qpair failed and we were unable to recover it.
00:29:41.007 [2024-07-23 14:11:31.844403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.844836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.844849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.845280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.845595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.845608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.846029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.846465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.846478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.846784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.847124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.847137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.847434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.847789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.847802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.848242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.848827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.848840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.849295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.849730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.849743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 
00:29:41.007 [2024-07-23 14:11:31.850176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.850592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.850605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.850991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.851415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.851429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.851808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.852213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.852226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.852663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.853067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.853080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.853660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.854113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.854129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.854487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.854922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.854935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.855295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.855596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.855615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 
00:29:41.007 [2024-07-23 14:11:31.856029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.856391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.856405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.856834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.857235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.857249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.857668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.858018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.858031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.858494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.858954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.858967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.859406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.859821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.859834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.860264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.860641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.860654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.861087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.861464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.861477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 
00:29:41.007 [2024-07-23 14:11:31.861917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.862344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.862358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.862723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.863178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.863191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.863618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.863992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.864005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.864436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.864842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.864854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.865273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.865695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.865708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.007 [2024-07-23 14:11:31.866069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.866498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.007 [2024-07-23 14:11:31.866511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.007 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.866920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.867274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.867288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 
00:29:41.008 [2024-07-23 14:11:31.867650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.868054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.868067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.868426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.868775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.868788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.869220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.869624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.869637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.870055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.870467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.870479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.870910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.871258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.871271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.871620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.872051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.872065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.872496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.872863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.872876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 
00:29:41.008 [2024-07-23 14:11:31.873231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.873569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.873582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.874008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.874362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.874376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.874780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.875193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.875207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.875636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.876073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.876086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.876455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.876794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.876807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.877209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.877634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.877647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.878006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.878421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.878434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 
00:29:41.008 [2024-07-23 14:11:31.878866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.879215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.879228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.879682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.880086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.880100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.880526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.880875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.880888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.881340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.881767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.881779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.882187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.882613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.882627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.883057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.883461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.883474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.883913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.884286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.884300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 
00:29:41.008 [2024-07-23 14:11:31.884725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.885104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.885117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.885415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.885759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.885772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.886125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.886527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.886539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.886956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.887389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.887402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.887745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.888100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.888113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.888476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.888880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.888893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.008 qpair failed and we were unable to recover it. 00:29:41.008 [2024-07-23 14:11:31.889256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.008 [2024-07-23 14:11:31.889704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.889717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 
00:29:41.009 [2024-07-23 14:11:31.890078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.890527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.890539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.890914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.891290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.891303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.891709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.892138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.892152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.892559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.892971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.892985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.893416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.893840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.893853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.894281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.894637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.894650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.895027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.895468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.895485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 
00:29:41.009 [2024-07-23 14:11:31.895838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.896121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.896135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.896563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.896992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.897005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.897359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.897777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.897789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.898217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.898642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.898655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.899062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.899478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.899491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.899793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.900193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.900207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.900626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.901029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.901046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 
00:29:41.009 [2024-07-23 14:11:31.901459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.901798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.901812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.902243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.902580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.902593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.902998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.903412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.903428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.903855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.904213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.904227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.904636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.905051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.905064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.905482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.905917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.905930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.906289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.906692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.906705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 
00:29:41.009 [2024-07-23 14:11:31.907127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.907463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.907476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.907856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.908206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.908220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.908652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.909083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.909097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.909502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.909899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.909912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.910272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.910720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.910732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.911086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.911425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.911437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 00:29:41.009 [2024-07-23 14:11:31.911845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.912273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.009 [2024-07-23 14:11:31.912287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.009 qpair failed and we were unable to recover it. 
00:29:41.009 [2024-07-23 14:11:31.912713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.913142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.913155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.913562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.913924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.913937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.914347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.914691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.914704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.915138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.915553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.915566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.915965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.916396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.916410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.916762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.917183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.917196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.917593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.917954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.917969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 
00:29:41.010 [2024-07-23 14:11:31.918352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.918787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.918800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.919230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.919661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.919674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.920107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.920461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.920473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.920928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.921354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.921368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.921725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.922178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.922191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.922573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.923004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.923018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 00:29:41.010 [2024-07-23 14:11:31.923386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.923680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.010 [2024-07-23 14:11:31.923693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:41.010 qpair failed and we were unable to recover it. 
00:29:41.010 [2024-07-23 14:11:31.924055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.010 [2024-07-23 14:11:31.924482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.010 [2024-07-23 14:11:31.924495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420
00:29:41.010 qpair failed and we were unable to recover it.
[... output condensed: the same four-line pattern (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 14:11:31.924928 through 14:11:31.969306; no attempt succeeds ...]
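Note on the repeated error: on Linux, errno 111 is ECONNREFUSED, meaning the TCP SYN was actively rejected because nothing was accepting connections on 10.0.0.2:4420 (4420 is the NVMe/TCP default port), so each socket failed before the NVMe-oF handshake could even begin. The following is a minimal standalone C sketch (a hypothetical illustration, not SPDK's actual posix_sock_create) that reproduces exactly the error these log lines report:

/* Hypothetical sketch, not SPDK code: connecting to a reachable host
 * with no listener on the target port fails with errno 111
 * (ECONNREFUSED) on Linux, matching the posix_sock_create lines above.
 * If the host is unreachable you would instead see ETIMEDOUT or
 * EHOSTUNREACH, so reproduce against a local address for a clean 111. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP default port, as in the log */
    if (inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Run against any address that is up but has no listener on the chosen port and it prints "connect() failed, errno = 111 (Connection refused)", the same failure the test log records on every attempt.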
00:29:41.012 [2024-07-23 14:11:31.969711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.012 [2024-07-23 14:11:31.970136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.012 [2024-07-23 14:11:31.970158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:41.012 qpair failed and we were unable to recover it.
[... output condensed: from this point the identical pattern continues against a new qpair object, tqpair=0x7f69c8000b90 (the target address and port are unchanged: 10.0.0.2:4420), repeating for every attempt from 14:11:31.970576 through 14:11:32.056626; every attempt ends with "qpair failed and we were unable to recover it." and no connection is ever established ...]
00:29:41.282 [2024-07-23 14:11:32.057046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.282 [2024-07-23 14:11:32.057441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.282 [2024-07-23 14:11:32.057454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.282 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.057795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.058204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.058220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.058652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.059162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.059177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.059542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.059967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.059981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.060419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.060767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.060784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.061193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.061560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.061572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.062022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.062382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.062397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 
00:29:41.283 [2024-07-23 14:11:32.062851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.063282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.063295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.063703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.064134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.064150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.064444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.064812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.064827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.065177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.065554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.065568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.066022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.066438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.066457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.066894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.067266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.067284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.067654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.069056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.069078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 
00:29:41.283 [2024-07-23 14:11:32.069521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.069879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.069896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.070261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.070685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.070698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.071129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.071566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.071582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.071951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.072383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.072396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.072828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.073185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.073200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.073629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.074032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.074056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.074470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.074765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.074777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 
00:29:41.283 [2024-07-23 14:11:32.075134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.075541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.075558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.283 qpair failed and we were unable to recover it. 00:29:41.283 [2024-07-23 14:11:32.075969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.283 [2024-07-23 14:11:32.076401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.076414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.076798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.077234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.077248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.077676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.078061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.078076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.078427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.078885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.078900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.079358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.079785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.079795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.080221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.080642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.080652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 
00:29:41.284 [2024-07-23 14:11:32.080945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.081368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.081378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.081798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.082205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.082215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.082631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.083049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.083059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.083467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.083834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.083844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.084198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.084620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.084630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.084984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.085403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.085413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.085815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.086238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.086248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 
00:29:41.284 [2024-07-23 14:11:32.086651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.087071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.087081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.087384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.087808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.087821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.088195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.088650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.088663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.089020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.089452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.089465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.089893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.090319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.090332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.090690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.091093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.091107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.091525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.091949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.091962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 
00:29:41.284 [2024-07-23 14:11:32.092266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.092671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.092683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:41.284 qpair failed and we were unable to recover it. 00:29:41.284 [2024-07-23 14:11:32.093160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.093587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.284 [2024-07-23 14:11:32.093605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69b8000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.093697] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1254200 is same with the state(5) to be set 00:29:41.285 [2024-07-23 14:11:32.094163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.094617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.094630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.095058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.095423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.095434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.095845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.096134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.096145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.096501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.096946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.096957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.097299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.097701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.097713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-07-23 14:11:32.098020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.098443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.098454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.098807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.099235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.099246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.099627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.099995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.100006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.100408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.100828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.100839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.101260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.101722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.101750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.102191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.102665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.102694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.103165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.103628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.103657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-07-23 14:11:32.104052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.104516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.104545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.104912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.105349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.105379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.105875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.106356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.106387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.106864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.107267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.107297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.107761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.108209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.108219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.108549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.108958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.108987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.109393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.109765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.109794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 
00:29:41.285 [2024-07-23 14:11:32.110242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.110634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.110662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.285 qpair failed and we were unable to recover it. 00:29:41.285 [2024-07-23 14:11:32.111108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.111547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.285 [2024-07-23 14:11:32.111583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.111952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.112410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.112441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.112867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.113251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.113281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.113553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.113922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.113951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.114289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.114749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.114777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.115235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.115598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.115626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 
00:29:41.286 [2024-07-23 14:11:32.115999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.116444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.116454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.116880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.117287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.117316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.117757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.118191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.118222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.118729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.119117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.119148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.119591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.119992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.120021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.120444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.120848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.120878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.121320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.121782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.121811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 
00:29:41.286 [2024-07-23 14:11:32.122231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.122517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.122545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.123004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.123478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.123509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.123886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.124324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.124354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.286 [2024-07-23 14:11:32.124753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.125199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.286 [2024-07-23 14:11:32.125209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.286 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.125550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.125894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.125922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.126384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.126772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.126801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.127184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.127622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.127650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-07-23 14:11:32.128062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.128519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.128547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.128944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.129401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.129430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.129897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.130380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.130409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.130803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.131188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.131217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.131612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.131862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.131890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.132337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.132733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.132761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.133145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.133541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.133569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-07-23 14:11:32.134033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.134397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.134426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.134842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.135219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.135229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.135655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.136004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.136033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.136472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.136915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.136945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.137280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.137655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.137683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.138062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.138471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.138499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.138887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.139262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.139291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 
00:29:41.287 [2024-07-23 14:11:32.139703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.140145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.140174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.140641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.141098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.141127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.141577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.142029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.142075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.142520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.142891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.142919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.287 qpair failed and we were unable to recover it. 00:29:41.287 [2024-07-23 14:11:32.143307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.287 [2024-07-23 14:11:32.143781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.143810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.144292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.144754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.144782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.145214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.145626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.145655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 
00:29:41.288 [2024-07-23 14:11:32.146036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.146502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.146531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.146998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.147387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.147416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.147797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.148091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.148120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.148563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.148958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.148994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.149336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.149743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.149772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.150153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.150616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.150645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.151034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.151477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.151506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 
00:29:41.288 [2024-07-23 14:11:32.151906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.152368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.152377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.152740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.153174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.153204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.153670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.154062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.154092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.154509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.154905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.154934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.155323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.155675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.155702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.156145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.156459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.156487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.156938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.157409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.157438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 
00:29:41.288 [2024-07-23 14:11:32.157847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.158164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.158174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.158539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.158920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.158948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.159390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.159643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.159672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.160111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.160568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.160596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.288 qpair failed and we were unable to recover it. 00:29:41.288 [2024-07-23 14:11:32.161088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.161485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.288 [2024-07-23 14:11:32.161513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.161928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.162386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.162415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.162880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.163259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.163288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-07-23 14:11:32.163679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.163903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.163932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.164279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.164658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.164687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.165151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.165648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.165676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.166138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.166492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.166520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.166981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.167311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.167339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.167677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.168132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.168161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.168578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.168899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.168927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-07-23 14:11:32.169337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.169814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.169853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.170139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.170556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.170584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.170985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.171457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.171486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.171810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.172270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.172299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.172693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.173065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.173095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.173431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.173866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.173894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.174203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.174638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.174666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 
00:29:41.289 [2024-07-23 14:11:32.175130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.175532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.175560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.175945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.176405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.176435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.176831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.177211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.177240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.177705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.178099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.178128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.289 qpair failed and we were unable to recover it. 00:29:41.289 [2024-07-23 14:11:32.178617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.289 [2024-07-23 14:11:32.179072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.179101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.179489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.179950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.179978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.180314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.180655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.180683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-07-23 14:11:32.181161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.181550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.181579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.181966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.182369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.182379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.182713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.183085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.183114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.183456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.183785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.183814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.184193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.184597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.184625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.185011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.185347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.185375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.185842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.186309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.186338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-07-23 14:11:32.186509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.186945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.186973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.187413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.187848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.187877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.188347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.188783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.188811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.189224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.189617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.189645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.190106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.190504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.190533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.190872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.191272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.191301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.191777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.192215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.192244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 
00:29:41.290 [2024-07-23 14:11:32.192716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.193177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.193206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.193619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.194009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.194038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.194375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.194768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.194801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.195142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.195528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.195556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.290 [2024-07-23 14:11:32.196051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.196485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.290 [2024-07-23 14:11:32.196513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.290 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.196905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.197287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.197296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.197640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.198095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.198124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-07-23 14:11:32.198497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.198936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.198964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.199308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.199742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.199770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.200205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.200620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.200648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.201108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.201504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.201533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.201972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.202146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.202175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.202545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.202948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.202981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.203368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.203702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.203730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-07-23 14:11:32.204060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.204462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.204491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.204960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.205368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.205397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.205812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.206182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.206212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.206625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.207063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.207092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.207557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.207937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.207966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.208355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.208643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.208671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.209069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.209392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.209421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 
00:29:41.291 [2024-07-23 14:11:32.209892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.210361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.210390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.210831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.211291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.211331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.211804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.212255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.212284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.212673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.213134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.213164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.213631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.214066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.214095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.291 qpair failed and we were unable to recover it. 00:29:41.291 [2024-07-23 14:11:32.214565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.215023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.291 [2024-07-23 14:11:32.215061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.215510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.215854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.215864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-07-23 14:11:32.216267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.216611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.216639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.217030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.217421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.217449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.217860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.218263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.218291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.218748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.219135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.219164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.219560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.220017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.220060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.220508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.220968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.220996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.221379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.221766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.221794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-07-23 14:11:32.222196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.222578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.222606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.222835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.223217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.223226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.223595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.224061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.224102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.224250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.224612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.224622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.225039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.225483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.225512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.225981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.226360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.226389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 00:29:41.292 [2024-07-23 14:11:32.226855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.227316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.292 [2024-07-23 14:11:32.227345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.292 qpair failed and we were unable to recover it. 
00:29:41.292 [2024-07-23 14:11:32.227738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.228108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.228137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.228527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.228977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.229006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.229434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.229841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.229871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.230184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.230623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.230652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.231035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.231518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.231546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.232010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.232406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.232436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.232899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.233336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.233366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 
00:29:41.293 [2024-07-23 14:11:32.233743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.234150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.234179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.234492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.234861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.234890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.235344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.235807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.235835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.236235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.236623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.236651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.237099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.237484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.237512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.237743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.238176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.238205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.238592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.238985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.239013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 
00:29:41.293 [2024-07-23 14:11:32.239421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.239804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.239832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.240126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.240619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.240647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.241070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.241375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.241403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.241726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.242184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.242194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.242636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.243086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.243115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.243581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.243947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.243978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 00:29:41.293 [2024-07-23 14:11:32.244406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.244881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.293 [2024-07-23 14:11:32.244909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.293 qpair failed and we were unable to recover it. 
00:29:41.293 [2024-07-23 14:11:32.245304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.293 [2024-07-23 14:11:32.245717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.293 [2024-07-23 14:11:32.245746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:41.293 qpair failed and we were unable to recover it.
[... the same sequence (connect() failed, errno = 111 from posix.c:1032:posix_sock_create, followed by the nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 2024-07-23 14:11:32.246 through 14:11:32.365 ...]
00:29:41.565 [2024-07-23 14:11:32.365866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.565 [2024-07-23 14:11:32.366243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.565 [2024-07-23 14:11:32.366273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:41.565 qpair failed and we were unable to recover it.
00:29:41.565 [2024-07-23 14:11:32.366675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.367070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.367101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.367567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.368032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.368074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.368475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.368938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.368966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.369434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.369738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.369747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.370168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.370514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.370542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.370874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.371356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.371387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.371835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.372286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.372316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 
00:29:41.565 [2024-07-23 14:11:32.372707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.373167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.373197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.373585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.374034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.374077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.374490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.374883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.374912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.375299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.375687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.375696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.376037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.376459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.376487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.376870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.377261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.377291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.377771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.378153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.378184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 
00:29:41.565 [2024-07-23 14:11:32.378515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.378969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.378978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.379311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.379788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.379817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.380154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.380613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.380642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.381094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.381477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.381506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.382145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.382516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.382545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.382933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.383398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.383429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 00:29:41.565 [2024-07-23 14:11:32.383831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.384293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.565 [2024-07-23 14:11:32.384323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.565 qpair failed and we were unable to recover it. 
00:29:41.566 [2024-07-23 14:11:32.384765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.385162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.385193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.385585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.386020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.386058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.386522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.386933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.386961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.387373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.387828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.387837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.388188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.388487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.388515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.388963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.389353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.389382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.389774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.390235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.390264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 
00:29:41.566 [2024-07-23 14:11:32.390650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.391111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.391140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.391531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.391932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.391960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.392226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.392594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.392622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.392959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.393181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.393210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.393652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.394124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.394153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.394481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.394886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.394894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.395298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.395693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.395721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 
00:29:41.566 [2024-07-23 14:11:32.396183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.396648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.396657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.396998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.397421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.397431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.397785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.398245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.398274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.398675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.399002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.399030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.399499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.399945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.399954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.400261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.400615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.400644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.401061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.401512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.401541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 
00:29:41.566 [2024-07-23 14:11:32.401924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.402283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.402312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.402647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.403094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.403123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.566 qpair failed and we were unable to recover it. 00:29:41.566 [2024-07-23 14:11:32.403570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.403942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.566 [2024-07-23 14:11:32.403970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.404391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.404734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.404763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.405082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.405312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.405340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.405810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.406060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.406089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.406443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.406876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.406905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 
00:29:41.567 [2024-07-23 14:11:32.407346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.407685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.407714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.408090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.408527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.408555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.408942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.409412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.409441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.409655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.410078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.410107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.410327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.410723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.410751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.411193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.411668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.411697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.412086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.412494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.412524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 
00:29:41.567 [2024-07-23 14:11:32.412921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.413385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.413415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.413813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.414233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.414262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.414714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.415107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.415136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.415575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.416038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.416099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.416561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.416949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.416978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.417419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.417807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.417817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.418174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.418521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.418550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 
00:29:41.567 [2024-07-23 14:11:32.418939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.419323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.419352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.419818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.420208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.420236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.420628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.421032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.421071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.421512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.421946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.421974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.422415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.422806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.422835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.423276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.423500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.423530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.423865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.424252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.424282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 
00:29:41.567 [2024-07-23 14:11:32.424746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.425182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.425213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.425689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.426101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.426131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.567 [2024-07-23 14:11:32.426444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.426887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.567 [2024-07-23 14:11:32.426916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.567 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.427316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.427725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.427754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.428197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.428655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.428684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.429096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.429555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.429583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.429945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.430300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.430330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 
00:29:41.568 [2024-07-23 14:11:32.430801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.431198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.431228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.431668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.432106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.432134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.432520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.432836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.432864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.433158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.433599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.433628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.434068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.434527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.434556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.434966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.435429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.435458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.435900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.436291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.436320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 
00:29:41.568 [2024-07-23 14:11:32.436705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.437108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.437137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.437581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.437758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.437786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.438192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.438653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.438681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.439126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.439592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.439620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.440013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.440471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.440501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.440942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.441269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.441299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.441708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.442090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.442120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 
00:29:41.568 [2024-07-23 14:11:32.442434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.442815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.442849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.443036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.443399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.443409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.443770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.444140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.444169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.444479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.444900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.444909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.445337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.445775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.445803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.446265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.446598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.446626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 00:29:41.568 [2024-07-23 14:11:32.447067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.447370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.568 [2024-07-23 14:11:32.447403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.568 qpair failed and we were unable to recover it. 
00:29:41.568 [2024-07-23 14:11:32.447843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.568 [2024-07-23 14:11:32.448306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.568 [2024-07-23 14:11:32.448354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:41.568 qpair failed and we were unable to recover it.
00:29:41.568 [2024-07-23 14:11:32.448815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.568 [2024-07-23 14:11:32.449251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.568 [2024-07-23 14:11:32.449281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:41.568 qpair failed and we were unable to recover it.
[... the same four-line sequence (two "connect() failed, errno = 111" entries from posix_sock_create, a sock connection error on tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 from nvme_tcp_qpair_connect_sock, and "qpair failed and we were unable to recover it.") repeats for every subsequent reconnect attempt from 14:11:32.449 through 14:11:32.571 ...]
00:29:41.574 [2024-07-23 14:11:32.571722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.574 [2024-07-23 14:11:32.572061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.574 [2024-07-23 14:11:32.572071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:41.574 qpair failed and we were unable to recover it.
00:29:41.840 [2024-07-23 14:11:32.572277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.572729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.572740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.573039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.573448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.573459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.573824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.574183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.574195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.574628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.575046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.575060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.575366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.575720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.575731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.576032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.576395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.576406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.576716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.577051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.577069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 
00:29:41.840 [2024-07-23 14:11:32.577338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.577690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.577702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.578053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.578473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.578484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.578933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.579289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.579299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.579643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.580094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.580105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.580527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.580956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.580968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.581348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.581798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.581808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.582153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.582437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.582447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 
00:29:41.840 [2024-07-23 14:11:32.582796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.583197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.583208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.583548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.583978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.583989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.584329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.584545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.584558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.584954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.585375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.585386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.585809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.586092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.586103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.586534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.586941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.586952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.587371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.587713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.587724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 
00:29:41.840 [2024-07-23 14:11:32.588029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.588387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.588398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.588821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.589234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.589248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.840 qpair failed and we were unable to recover it. 00:29:41.840 [2024-07-23 14:11:32.589383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.840 [2024-07-23 14:11:32.589737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.589747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.590146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.590577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.590589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.590959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.591149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.591160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.591461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.591790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.591803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.592227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.592576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.592588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-07-23 14:11:32.592990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.593277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.593289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.593631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.593973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.593985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.594337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.594738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.594749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.595083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.595507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.595519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.595871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.596208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.596219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.596588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.596866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.596877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.597276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.597573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.597584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-07-23 14:11:32.597934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.598228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.598241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.598643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.598981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.598994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.599441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.599579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.599589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.599934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.600334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.600346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.600750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.601181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.601193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.601596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.601967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.601978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.602384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.602810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.602820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-07-23 14:11:32.603246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.603616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.603627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.604051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.604482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.604493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.604896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.605342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.605353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.605706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.606015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.606025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.606421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.606887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.606916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.607317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.607756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.607785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.608251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.608635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.608663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 
00:29:41.841 [2024-07-23 14:11:32.609104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.609571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.609599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.609990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.610386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.610416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.841 qpair failed and we were unable to recover it. 00:29:41.841 [2024-07-23 14:11:32.610857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.611236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.841 [2024-07-23 14:11:32.611266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.611661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.612063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.612094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.612536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.612860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.612889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.613329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.613777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.613805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.614304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.614648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.614677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 
00:29:41.842 [2024-07-23 14:11:32.615065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.615553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.615581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.615968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.616352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.616383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.616850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.617175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.617205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.617647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.617990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.618018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.618431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.618890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.618919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.619241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.619646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.619655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.620013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.620509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.620540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 
00:29:41.842 [2024-07-23 14:11:32.620873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.621254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.621285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.621739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.622120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.622150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.622564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.622967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.622996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.623446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.623824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.623852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.624305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.624766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.624800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.625229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.625602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.625631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.626008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.626401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.626432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 
00:29:41.842 [2024-07-23 14:11:32.626827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.627269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.627300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.627683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.628179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.628209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.628653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.629132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.629170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.629619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.630005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.630034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.630504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.630881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.630910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.631375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.631815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.631843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.632239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.632669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.632678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 
00:29:41.842 [2024-07-23 14:11:32.633063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.633242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.633271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.633663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.634050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.634080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.634408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.634791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.842 [2024-07-23 14:11:32.634820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.842 qpair failed and we were unable to recover it. 00:29:41.842 [2024-07-23 14:11:32.635152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.635611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.635639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.636083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.636471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.636499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.636875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.637342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.637372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.637815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.638195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.638225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 
00:29:41.843 [2024-07-23 14:11:32.638631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.639091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.639120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.639500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.639931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.639959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.640397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.640833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.640861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.641258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.641591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.641620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.642009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.642417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.642447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.642887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.643264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.643293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.643659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.644011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.644039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 
00:29:41.843 [2024-07-23 14:11:32.644535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.644917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.644945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.645325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.645709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.645737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.646174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.646608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.646636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.647054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.647489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.647518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.647926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.648312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.648341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.648780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.649149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.649178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 00:29:41.843 [2024-07-23 14:11:32.649539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.649996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.843 [2024-07-23 14:11:32.650025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.843 qpair failed and we were unable to recover it. 
00:29:41.843 [2024-07-23 14:11:32.650427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.843 [2024-07-23 14:11:32.650814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:41.843 [2024-07-23 14:11:32.650843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:41.843 qpair failed and we were unable to recover it.
00:29:41.843 [... the same four-line sequence (two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats back-to-back from 14:11:32.651256 through 14:11:32.778228 ...]
00:29:41.849 [2024-07-23 14:11:32.778695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.779018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.779054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-07-23 14:11:32.779520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.779676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.779705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-07-23 14:11:32.780107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.780457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.780486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-07-23 14:11:32.780926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.781390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.781419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.849 qpair failed and we were unable to recover it. 00:29:41.849 [2024-07-23 14:11:32.781805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.849 [2024-07-23 14:11:32.782160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.782170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.782575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.782994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.783023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.783427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.783662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.783691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 
00:29:41.850 [2024-07-23 14:11:32.784078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.784536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.784564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.785008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.785451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.785480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.785922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.786359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.786388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.786776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.787209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.787238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.787679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.788142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.788171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.788656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.789060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.789090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.789581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.790016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.790053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 
00:29:41.850 [2024-07-23 14:11:32.790392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.790762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.790791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.791258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.791665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.791703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.792133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.792522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.792552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.792977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.793451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.793481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.793859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.794111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.794140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.794582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.795050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.795080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.795501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.795913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.795942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 
00:29:41.850 [2024-07-23 14:11:32.796373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.796758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.796786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.797258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.797712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.797740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.798160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.798548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.798577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.798973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.799409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.799437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.799826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.800301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.800330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.800617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.800902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.800931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.801165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.801546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.801574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 
00:29:41.850 [2024-07-23 14:11:32.802013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.802456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.802485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.802873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.803311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.803340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.803706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.804109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.804139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.804582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.804968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.804996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.850 [2024-07-23 14:11:32.805451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.805729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.850 [2024-07-23 14:11:32.805738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.850 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.806095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.806436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.806465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.806926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.807376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.807404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 
00:29:41.851 [2024-07-23 14:11:32.807810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.808195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.808225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.808617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.809027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.809064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.809476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.809875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.809904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.810386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.810739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.810751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.811094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.811438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.811449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.811802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.812091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.812102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.812453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.812798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.812809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 
00:29:41.851 [2024-07-23 14:11:32.813100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.813444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.813456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.813732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.814071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.814082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.814369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.814788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.814799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.815134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.815358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.815369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.815788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.816085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.816095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.816427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.816827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.816838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.817122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.817458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.817469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 
00:29:41.851 [2024-07-23 14:11:32.817658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.818090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.818102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.818398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.818742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.818752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.819174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.819616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.819626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.819980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.820414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.820425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.820703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.821100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.821111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.821538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.821889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.821902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.822250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.822594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.822604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 
00:29:41.851 [2024-07-23 14:11:32.822984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.823414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.823425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.823760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.824108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.851 [2024-07-23 14:11:32.824121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.851 qpair failed and we were unable to recover it. 00:29:41.851 [2024-07-23 14:11:32.824422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.824842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.824852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.825136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.825513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.825523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.825899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.826249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.826262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.826773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.827064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.827076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.827420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.827722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.827735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 
00:29:41.852 [2024-07-23 14:11:32.828140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.828434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.828446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.828799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.829101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.829113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.829457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.829800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.829813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.830124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.830295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.830308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.830605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.830893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.830904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.831196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.831541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.831552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.831904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.832303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.832315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 
00:29:41.852 [2024-07-23 14:11:32.832654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.832991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.833002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.833298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.833633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.833644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.834063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.834334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.834345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.834699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.835072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.835083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.835533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.835802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.835812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.836242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.836588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.836599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.837039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.837478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.837491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 
00:29:41.852 [2024-07-23 14:11:32.837847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.838192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.838204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.838580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.838954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.838965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.839317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.839653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.839665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.839973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.840368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.840379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.840716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.841075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.841087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.841421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.841762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.841774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.842067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.842489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.842500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 
00:29:41.852 [2024-07-23 14:11:32.842799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.843226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.843237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.843667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.844099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.844110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.852 qpair failed and we were unable to recover it. 00:29:41.852 [2024-07-23 14:11:32.844537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.852 [2024-07-23 14:11:32.844885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.844898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:41.853 [2024-07-23 14:11:32.845264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.845648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.845660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:41.853 [2024-07-23 14:11:32.845959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.846294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.846305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:41.853 [2024-07-23 14:11:32.846689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.847122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.847136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:41.853 [2024-07-23 14:11:32.847558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.847802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.847815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 
00:29:41.853 [2024-07-23 14:11:32.848153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.848517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.848528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:41.853 [2024-07-23 14:11:32.848977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.849378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.849389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:41.853 [2024-07-23 14:11:32.849685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.850106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:41.853 [2024-07-23 14:11:32.850118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:41.853 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.850549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.850922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.850933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.851379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.851796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.851807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.852233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.852762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.852776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.853252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.853677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.853688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 
00:29:42.119 [2024-07-23 14:11:32.854039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.854506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.854519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.854957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.855313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.855348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.855816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.856308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.856339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.856740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.857090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.857100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.857530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.857969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.857998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.858479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.858914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.858942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 00:29:42.119 [2024-07-23 14:11:32.859345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.859732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.119 [2024-07-23 14:11:32.859760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.119 qpair failed and we were unable to recover it. 
00:29:42.124 [2024-07-23 14:11:32.982995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.124 [2024-07-23 14:11:32.983407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.124 [2024-07-23 14:11:32.983436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.124 qpair failed and we were unable to recover it. 00:29:42.124 [2024-07-23 14:11:32.983790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.124 [2024-07-23 14:11:32.984185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.124 [2024-07-23 14:11:32.984216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.124 qpair failed and we were unable to recover it. 00:29:42.124 [2024-07-23 14:11:32.984668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.124 [2024-07-23 14:11:32.985150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.124 [2024-07-23 14:11:32.985163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.985595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.986005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.986034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.986446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.986754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.986782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.987242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.987629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.987657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.988129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.988547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.988577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 
00:29:42.125 [2024-07-23 14:11:32.989077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.989427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.989457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.989911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.990298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.990310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.990613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.990993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.991022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.991452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.991799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.991828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.992233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.992621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.992650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.993131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.993565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.993599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.994124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.994590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.994619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 
00:29:42.125 [2024-07-23 14:11:32.995128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.995487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.995516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.995987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.996428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.996439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.996743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.997172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.997202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.997610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.998129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.998160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.998560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.998970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.998999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:32.999483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.999947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:32.999976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.000370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.000802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.000831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 
00:29:42.125 [2024-07-23 14:11:33.001307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.001637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.001666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.002190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.002605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.002633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.003060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.003478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.003507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.003977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.004356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.004387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.004794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.005240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.005269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.005667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.006086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.006117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.006568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.006972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.007011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 
00:29:42.125 [2024-07-23 14:11:33.007391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.007782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.007811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.008325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.008671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.008700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.125 qpair failed and we were unable to recover it. 00:29:42.125 [2024-07-23 14:11:33.009185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.009529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.125 [2024-07-23 14:11:33.009558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.010106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.010450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.010479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.010891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.011384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.011415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.011829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.012323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.012355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.012828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.013208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.013239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 
00:29:42.126 [2024-07-23 14:11:33.013752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.014204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.014216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.014637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.015118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.015148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.015520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.015914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.015944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.016567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.017031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.017069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.017491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.018872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.018903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.019380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.019815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.019848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.020283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.021054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.021073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 
00:29:42.126 [2024-07-23 14:11:33.021510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.021894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.021906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.022375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.022702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.022714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.023083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.023469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.023480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.023878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.024326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.024338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.024838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.025231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.025242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.025618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.026102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.026112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.026500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.026863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.026900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 
00:29:42.126 [2024-07-23 14:11:33.027306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.027703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.027733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.028193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.028494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.028505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.028888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.029302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.029312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.029608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.030066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.030077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.030478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.030782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.030792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.031226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.031605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.031634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.032031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.032476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.032506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 
00:29:42.126 [2024-07-23 14:11:33.033068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.033518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.033547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.126 qpair failed and we were unable to recover it. 00:29:42.126 [2024-07-23 14:11:33.033894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.126 [2024-07-23 14:11:33.034310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.034341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.034828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.035326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.035337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.035773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.036181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.036212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.036686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.037105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.037135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.037474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.037985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.038014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.038445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.038791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.038821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 
00:29:42.127 [2024-07-23 14:11:33.039286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.039636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.039666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.040126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.040531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.040560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.041070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.041437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.041467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.041820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.042356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.042386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.042792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.043256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.043295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.043672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.044158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.044189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.044649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.045111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.045141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 
00:29:42.127 [2024-07-23 14:11:33.045595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.046135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.046166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.046515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.046984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.047013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.047384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.047796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.047826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.048257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.048667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.048696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.049101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.049555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.049585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.050077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.050492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.050522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.051021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.051539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.051569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 
00:29:42.127 [2024-07-23 14:11:33.052126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.052557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.052586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.053154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.053637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.053666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.054152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.054604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.054632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.055088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.055562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.055591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.056079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.056440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.056468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.056830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.057249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.057280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.057731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.058271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.058302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 
00:29:42.127 [2024-07-23 14:11:33.058732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.059209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.059239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.127 [2024-07-23 14:11:33.059643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.060094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.127 [2024-07-23 14:11:33.060135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.127 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.060494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.060863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.060876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.061186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.061502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.061515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.061968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.062356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.062368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.062786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.063228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.063240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.063641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.064072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.064084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 
00:29:42.128 [2024-07-23 14:11:33.064409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.064782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.064793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.065230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.065650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.065677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.066119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.066484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.066496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.066867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.067193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.067205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.067552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.067918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.067930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.068348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.068769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.068781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.069242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.069660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.069673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 
00:29:42.128 [2024-07-23 14:11:33.070201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.070513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.070526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.070976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.071352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.071364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.071680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.072057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.072070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.072456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.072766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.072777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.073220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.073583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.073595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.073887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.074256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.074268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 00:29:42.128 [2024-07-23 14:11:33.074588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.074992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.128 [2024-07-23 14:11:33.075003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.128 qpair failed and we were unable to recover it. 
00:29:42.128 [2024-07-23 14:11:33.075370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.128 [2024-07-23 14:11:33.075732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.128 [2024-07-23 14:11:33.075743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.128 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats, with only the timestamps advancing, for every connect attempt from 14:11:33.075 through 14:11:33.207 (console time 00:29:42.128 to 00:29:42.400): two connect() failures with errno = 111 from posix.c:1032, one sock connection error for tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2289, then "qpair failed and we were unable to recover it." ...]
00:29:42.400 [2024-07-23 14:11:33.206459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.400 [2024-07-23 14:11:33.206897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.400 [2024-07-23 14:11:33.206925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.400 qpair failed and we were unable to recover it.
00:29:42.400 [2024-07-23 14:11:33.207341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.207744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.207773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.208196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.208630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.208659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.209075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.209494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.209525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.210028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.210473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.210503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.211006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.211415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.211445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.211869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.212333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.212363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.212769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.213241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.213271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 
00:29:42.400 [2024-07-23 14:11:33.213674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.214105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.214136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.214582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.215067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.215097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.215504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.215942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.215972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.216429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.216926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.216956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.217437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.217845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.217875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.218220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.218619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.218648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.219059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.219413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.219441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 
00:29:42.400 [2024-07-23 14:11:33.219843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.220147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.220158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.400 qpair failed and we were unable to recover it. 00:29:42.400 [2024-07-23 14:11:33.220611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.221019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.400 [2024-07-23 14:11:33.221066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.221474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.221805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.221834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.222247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.222602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.222631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.223126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.223509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.223539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.223945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.224424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.224455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.225052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.225516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.225545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 
00:29:42.401 [2024-07-23 14:11:33.226004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.226451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.226482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.226968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.227474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.227504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.227867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.228320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.228352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.228850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.229351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.229381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.229752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.230235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.230267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.230671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.231126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.231156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.231576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.232082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.232123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 
00:29:42.401 [2024-07-23 14:11:33.232624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.233095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.233125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.233585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.234077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.234107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.234529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.235056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.235087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.235512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.235997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.236026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.236508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.237006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.237036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.237465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.237877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.237906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.238386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.238745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.238780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 
00:29:42.401 [2024-07-23 14:11:33.239139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.239510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.239523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.239890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.240339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.240380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.401 qpair failed and we were unable to recover it. 00:29:42.401 [2024-07-23 14:11:33.240758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.401 [2024-07-23 14:11:33.241239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.241269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.241606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.242073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.242103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.242625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.243023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.243064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.243504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.243986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.244015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.244535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.244886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.244914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 
00:29:42.402 [2024-07-23 14:11:33.245376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.245782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.245811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.246220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.246689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.246729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.247090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.247527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.247557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.248092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.248564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.248599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.249125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.249636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.249670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.250124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.250577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.250607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.250955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.251363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.251394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 
00:29:42.402 [2024-07-23 14:11:33.251817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.252232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.252262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.252674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.253158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.253189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.253594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.254060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.254090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.254509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.254958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.254987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.255431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.255858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.255887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.256531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.256929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.256959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.257376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.257797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.257832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 
00:29:42.402 [2024-07-23 14:11:33.258265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.258736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.258765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.259180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.259588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.259616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.260022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.260447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.260477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.260901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.261326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.261357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.261725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.262198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.262228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.262715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.263130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.263160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.263598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.264019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.264057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 
00:29:42.402 [2024-07-23 14:11:33.264479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.264951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.264981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.402 qpair failed and we were unable to recover it. 00:29:42.402 [2024-07-23 14:11:33.265448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.265939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.402 [2024-07-23 14:11:33.265968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.266431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.266787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.266799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.267211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.267640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.267668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.268079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.268482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.268519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.268951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.269261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.269292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.269703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.270173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.270203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 
00:29:42.403 [2024-07-23 14:11:33.270576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.271040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.271079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.271489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.271889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.271919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.272335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.272824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.272853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.273361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.273810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.273838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.274309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.274774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.274803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.275409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.275848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.275877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.276480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.276880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.276890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 
00:29:42.403 [2024-07-23 14:11:33.277259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.277676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.277687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.278110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.278535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.278564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.279092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.279508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.279537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.279973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.280431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.280462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.280877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.281352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.281383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.281870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.282403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.282433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.282834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.283282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.283312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 
00:29:42.403 [2024-07-23 14:11:33.283711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.284164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.284195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.284607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.284959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.284988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.285499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.285900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.285930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.286365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.286778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.286807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.287276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.287683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.287713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.288202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.288631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.288660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.289202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.289559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.289588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 
00:29:42.403 [2024-07-23 14:11:33.290068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.290517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.403 [2024-07-23 14:11:33.290554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.403 qpair failed and we were unable to recover it. 00:29:42.403 [2024-07-23 14:11:33.290985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.291384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.291415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.291824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.292224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.292254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.292738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.293119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.293149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.293556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.294102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.294133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.294528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.295020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.295057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.295511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.295854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.295883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 
00:29:42.404 [2024-07-23 14:11:33.296394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.296797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.296826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.297309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.297740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.297769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.298269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.298665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.298694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.299110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.299537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.299565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.300064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.300526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.300554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.301078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.301522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.301551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 00:29:42.404 [2024-07-23 14:11:33.301982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.302479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.404 [2024-07-23 14:11:33.302509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.404 qpair failed and we were unable to recover it. 
00:29:42.404 [2024-07-23 14:11:33.302951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.303413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.303442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.303842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.304312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.304342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.304752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.305239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.305269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.305725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.306198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.306229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.306711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.307170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.307200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.307656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.308146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.308176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.308699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.309163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.309193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.309623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.310103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.310134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.310628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.311066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.311096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.311568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.311956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.311967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.312400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.312750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.312763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.313151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.313596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.313610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.314005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.314380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.314392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.314841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.315288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.315300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.404 [2024-07-23 14:11:33.315686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.316041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.404 [2024-07-23 14:11:33.316061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.404 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.316414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.316718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.316730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.317149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.317573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.317584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.317978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.318399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.318411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.318828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.319235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.319247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.319608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.319917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.319928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.320363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.320831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.320842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.321322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.321671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.321684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.322152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.322572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.322585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.323048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.323406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.323419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.323841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.324257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.324269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.324690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.325147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.325161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.325603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.326049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.326061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.326480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.326899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.326910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.327277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.327718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.327730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.328083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.328559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.328570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.328939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.329317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.329329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.329361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1254200 (9): Bad file descriptor
00:29:42.405 [2024-07-23 14:11:33.329892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.330255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.330276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.330735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.331204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.331220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.331741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.332221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.332235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.332634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.333065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.333080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
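One record above breaks the pattern: nvme_tcp_qpair_process_completions at nvme_tcp.c:2098 fails to flush tqpair=0x1254200 with (9): Bad file descriptor. Errno 9 is EBADF, i.e. the qpair's socket had already been torn down by the time the flush ran; consistent with that, the connect errors after this point carry a different tqpair address (0x7f69c8000b90 rather than 0x7f69c0000b90), which suggests the old qpair was discarded and a new one allocated for the subsequent reconnect attempts. A small illustrative C sketch (not SPDK code) of how EBADF arises from I/O on an already-closed descriptor:

/* Any I/O on a descriptor that was already close()d fails with errno = 9. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(1);   /* obtain a valid descriptor */
    close(fd);         /* tear it down, as the failed qpair's socket was */

    if (write(fd, "x", 1) < 0) {
        /* Prints: flush failed (9): Bad file descriptor */
        printf("flush failed (%d): %s\n", errno, strerror(errno));
    }
    return 0;
}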
00:29:42.405 [2024-07-23 14:11:33.333452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.333776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.333789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.334233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.334676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.334690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.335108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.335531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.335556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.336000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.336421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.336435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.336873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.337311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.337324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.337767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.338205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.338219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.405 [2024-07-23 14:11:33.338644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.339080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.405 [2024-07-23 14:11:33.339094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.405 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.339540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.339959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.339989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.340471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.340957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.340986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.341529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.342008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.342036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.342532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.342918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.342946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.343454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.343946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.343975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.344490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.344861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.344890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.345375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.345867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.345897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.346411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.346883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.346912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.347328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.347773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.347802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.348307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.348697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.348726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.349120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.349567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.349595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.350057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.350532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.350561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.351011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.351463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.351492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.351989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.352459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.352489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.352886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.353293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.353308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.353748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.354164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.354194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.354703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.355102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.355133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.355533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.355946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.355976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.356384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.356860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.356889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.357422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.357856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.357885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.358361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.358829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.358857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.406 [2024-07-23 14:11:33.359258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.359735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.406 [2024-07-23 14:11:33.359764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.406 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.360233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.360667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.360697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.361108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.361582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.361611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.362037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.362567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.362596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.363027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.363500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.363530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.364032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.364438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.364467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.364939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.365405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.365436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.365907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.366380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.366410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.366956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.367358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.367388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.367844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.368292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.368322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.368778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.369199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.369229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.369683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.370156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.370186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.370673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.371146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.371176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.371660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.372152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.372166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.372577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.373077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.373107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.373619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.374036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.374076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.374550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.374971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.375000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.375506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.375853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.375882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.376367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.376845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.376874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.377300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.377797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.377826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.378367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.378765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.378794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.379196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.379611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.379651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.380080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.380565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.380595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.381077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.381480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.381509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.381928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.382394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.382424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.382831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.383304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.383335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.383852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.384241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.384265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.384656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.385030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.407 [2024-07-23 14:11:33.385096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.407 qpair failed and we were unable to recover it.
00:29:42.407 [2024-07-23 14:11:33.385522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.385998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.386028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.386495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.386894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.386923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.387380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.387832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.387860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.388360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.388811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.388840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.389341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.389741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.389770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.390250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.390750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.390778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.391293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.391794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.391823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.392344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.392844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.392872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.393382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.393753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.393781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.394270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.394664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.394692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.395172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.395626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.395655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.396113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.396584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.396613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.397066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.397542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.397570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.398060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.398461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.398491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.398967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.399443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.399473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.399912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.400396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.400426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.400933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.401413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.401443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.401907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.402374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.402388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.402834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.403239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.403254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.403682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.404082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.404096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.404529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.404976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.404990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.405432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.405878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.405892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.406340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.406843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.406872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.408 [2024-07-23 14:11:33.407366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.407825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.408 [2024-07-23 14:11:33.407854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.408 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.408356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.408806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.408819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.409206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.409569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.409583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.410008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.410489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.410520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.410908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.411395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.411425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.411877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.412305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.412335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.412753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.413224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.413254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.413653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.414131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.414161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.414571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.415071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.415102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.415583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.415982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.416011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.416498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.416975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.417005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.417537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.417991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.418020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.418530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.418983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.419012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.419545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.419939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.419968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.420357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.420802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.420815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.421261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.421736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.421765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.422205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.422666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.422695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.423177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.423680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.673 [2024-07-23 14:11:33.423710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.673 qpair failed and we were unable to recover it.
00:29:42.673 [2024-07-23 14:11:33.424231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.424740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.424769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.674 qpair failed and we were unable to recover it.
00:29:42.674 [2024-07-23 14:11:33.425271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.425745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.425774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.674 qpair failed and we were unable to recover it.
00:29:42.674 [2024-07-23 14:11:33.426262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.426760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.426789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.674 qpair failed and we were unable to recover it.
00:29:42.674 [2024-07-23 14:11:33.427316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.427734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.427763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.674 qpair failed and we were unable to recover it.
00:29:42.674 [2024-07-23 14:11:33.428239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.428698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.428712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.674 qpair failed and we were unable to recover it.
00:29:42.674 [2024-07-23 14:11:33.429163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.429636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.429665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.674 qpair failed and we were unable to recover it.
00:29:42.674 [2024-07-23 14:11:33.430062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.430463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.430492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.674 qpair failed and we were unable to recover it.
00:29:42.674 [2024-07-23 14:11:33.430900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.431348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.674 [2024-07-23 14:11:33.431377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.674 qpair failed and we were unable to recover it.
00:29:42.674 [2024-07-23 14:11:33.431886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.432341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.432372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.432881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.433340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.433371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.433883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.434362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.434393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.434880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.435332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.435362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.435765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.436241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.436271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.436760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.437254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.437268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.437694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.438056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.438071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 
00:29:42.674 [2024-07-23 14:11:33.438539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.438966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.438995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.439502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.439882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.439911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.440413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.440906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.440936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.441471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.441849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.441877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.442360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.442765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.442799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.443256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.443731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.443759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.444240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.444638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.444666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 
00:29:42.674 [2024-07-23 14:11:33.445069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.445505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.445534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.446019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.446526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.446556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.447054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.447475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.447505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.447965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.448448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.448479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.449006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.449474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.449505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.674 qpair failed and we were unable to recover it. 00:29:42.674 [2024-07-23 14:11:33.449928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.450384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.674 [2024-07-23 14:11:33.450414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.450921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.451412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.451443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 
00:29:42.675 [2024-07-23 14:11:33.451879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.452351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.452387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.452925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.453304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.453335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.453790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.454262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.454292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.454750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.455220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.455250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.455708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.456177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.456208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.456668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.457146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.457176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.457632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.458104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.458134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 
00:29:42.675 [2024-07-23 14:11:33.458534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.458950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.458979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.459493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.459922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.459951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.460362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.460815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.460844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.461254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.461654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.461689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.462171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.462622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.462651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.463161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.463576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.463605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.464082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.464491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.464519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 
00:29:42.675 [2024-07-23 14:11:33.465003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.465476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.465506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.465935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.466394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.466424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.466835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.467234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.467248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.467632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.468074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.468089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.468538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.468984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.469014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.469439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.469917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.469946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.470415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.470819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.470857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 
00:29:42.675 [2024-07-23 14:11:33.471296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.471776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.471804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.472289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.472788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.472817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.473327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.473756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.473786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.474263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.474739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.474769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.475255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.475734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.475763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.675 [2024-07-23 14:11:33.476189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.476687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.675 [2024-07-23 14:11:33.476716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.675 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.477241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.477662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.477677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 
00:29:42.676 [2024-07-23 14:11:33.478071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.478460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.478490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.478949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.479380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.479395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.479821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.480245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.480276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.480759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.481237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.481267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.481748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.482146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.482176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.482630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.483109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.483146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.483517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.483952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.483982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 
00:29:42.676 [2024-07-23 14:11:33.484490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.484968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.484996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.485527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.485926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.485955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.486433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.486938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.486967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.487445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.487846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.487875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.488312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.488726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.488740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.489104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.489493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.489522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.490011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.490519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.490549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 
00:29:42.676 [2024-07-23 14:11:33.491064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.491558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.491587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.492005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.492512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.492546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.493017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.493532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.493562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.494004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.494438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.494468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.494953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.495425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.495456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.495943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.496340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.496370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.496786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.497269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.497300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 
00:29:42.676 [2024-07-23 14:11:33.497722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.498194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.498232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.498598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.499005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.499034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.499523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.499933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.499962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.500377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.500832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.500861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.501282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.501780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.501809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.502328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.502720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.676 [2024-07-23 14:11:33.502749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.676 qpair failed and we were unable to recover it. 00:29:42.676 [2024-07-23 14:11:33.503228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.503701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.503730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 
00:29:42.677 [2024-07-23 14:11:33.504215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.504609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.504639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.505061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.505475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.505488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.505861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.506321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.506351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.506807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.507281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.507312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.507732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.508129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.508160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.508655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.509108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.509139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.509642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.510091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.510122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 
00:29:42.677 [2024-07-23 14:11:33.510576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.510997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.511027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.511530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.512009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.512038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.512571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.512990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.513019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.513515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.513996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.514025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.514514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.514969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.514998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.515509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.515960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.515989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.516503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.516985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.517014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 
00:29:42.677 [2024-07-23 14:11:33.517533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.518033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.518074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.518587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.519057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.519087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.519567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.520053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.520084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.520609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.521108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.521139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.521631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.522102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.522132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.522613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.523065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.523095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.523507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.523980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.524019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 
00:29:42.677 [2024-07-23 14:11:33.524388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.524882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.524911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.525396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.525792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.525822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.526181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.526595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.526624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.527103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.527554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.527583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.677 qpair failed and we were unable to recover it. 00:29:42.677 [2024-07-23 14:11:33.528071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.677 [2024-07-23 14:11:33.528547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.678 [2024-07-23 14:11:33.528576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.678 qpair failed and we were unable to recover it. 00:29:42.678 [2024-07-23 14:11:33.529034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.678 [2024-07-23 14:11:33.529517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.678 [2024-07-23 14:11:33.529546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.678 qpair failed and we were unable to recover it. 00:29:42.678 [2024-07-23 14:11:33.529968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.678 [2024-07-23 14:11:33.530387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.678 [2024-07-23 14:11:33.530417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:42.678 qpair failed and we were unable to recover it. 
00:29:42.678 [2024-07-23 14:11:33.530849 - 14:11:33.550721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.678 nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:42.678 qpair failed and we were unable to recover it.
00:29:42.678 (collapsed: this pattern, with connect() failing twice per attempt, repeats for 21 consecutive attempts in this window; the records are identical except for timestamps)
00:29:42.678 [2024-07-23 14:11:33.551157 - 14:11:33.552568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (collapsed: 2 attempts)
00:29:42.679 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 3435923 Killed "${NVMF_APP[@]}" "$@"
00:29:42.679 14:11:33 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
00:29:42.679 14:11:33 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:42.679 14:11:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:29:42.679 14:11:33 -- common/autotest_common.sh@712 -- # xtrace_disable
00:29:42.679 14:11:33 -- common/autotest_common.sh@10 -- # set +x
00:29:42.679 [2024-07-23 14:11:33.552972 - 14:11:33.556117] posix_sock_create / nvme_tcp_qpair_connect_sock: connect() failed, errno = 111 against tqpair=0x7f69c8000b90, addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (collapsed: 4 attempts interleaved with the trace above)
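A note for triage: errno 111 on Linux is ECONNREFUSED. target_disconnect.sh has just killed the running nvmf_tgt (the "line 44 ... Killed" message above) and is restarting it via disconnect_init / nvmfappstart, so nothing is listening on 10.0.0.2:4420 yet and every initiator connect() is refused. A minimal standalone sketch of the failing call (illustrative only, not SPDK's actual posix.c code; the address and port are taken from the log):

/* Sketch: reproduce the connect() failure the initiator keeps logging.
 * errno 111 == ECONNREFUSED: no listener on 10.0.0.2:4420 while the
 * target application is being restarted. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),              /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}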
00:29:42.679 [2024-07-23 14:11:33.556540 - 14:11:33.557825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (collapsed: 2 attempts)
00:29:42.679 14:11:33 -- nvmf/common.sh@469 -- # nvmfpid=3436811
00:29:42.679 14:11:33 -- nvmf/common.sh@470 -- # waitforlisten 3436811
00:29:42.679 14:11:33 -- common/autotest_common.sh@819 -- # '[' -z 3436811 ']'
00:29:42.679 14:11:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:42.679 14:11:33 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:42.679 14:11:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:42.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:42.679 14:11:33 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:42.679 14:11:33 -- common/autotest_common.sh@10 -- # set +x
00:29:42.679 14:11:33 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:42.679 [2024-07-23 14:11:33.558273 - 14:11:33.560526] posix_sock_create / nvme_tcp_qpair_connect_sock: connect() failed, errno = 111 against tqpair=0x7f69c8000b90, addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it. (collapsed: 3 attempts interleaved with the trace above)
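The waitforlisten step above polls (up to max_retries=100, per the trace) until the freshly started nvmf_tgt accepts connections on the RPC socket /var/tmp/spdk.sock. A hedged sketch of that polling logic, as a self-contained C stand-in rather than the actual autotest_common.sh helper:

/* Sketch of a "wait until the RPC socket is listening" poll. The path and
 * retry count come from the trace above; the 100 ms interval is an
 * arbitrary choice for illustration. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int rpc_listening(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    int up = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    close(fd);
    return up;
}

int main(void)
{
    for (int retry = 0; retry < 100; retry++) {      /* max_retries=100 */
        if (rpc_listening("/var/tmp/spdk.sock")) {
            puts("nvmf_tgt is up and listening");
            return 0;
        }
        usleep(100 * 1000);                           /* 100 ms between polls */
    }
    fprintf(stderr, "timed out waiting for /var/tmp/spdk.sock\n");
    return 1;
}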
00:29:42.679 [2024-07-23 14:11:33.560976 - 14:11:33.566571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (collapsed: 7 attempts, identical except for timestamps)
00:29:42.679 [2024-07-23 14:11:33.566975 - 14:11:33.570735] same failure pattern against tqpair=0x7f69c8000b90, addr=10.0.0.2, port=4420 (collapsed: 5 attempts)
00:29:42.679 [2024-07-23 14:11:33.571268 - 14:11:33.572554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (collapsed: 2 attempts; note the tqpair handle changes from 0x7f69c8000b90 to 0x7f69c0000b90 at this point)
00:29:42.679 [2024-07-23 14:11:33.572976 - 14:11:33.601489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (collapsed: 35 attempts, identical except for timestamps; the elapsed-time prefix advances from 00:29:42.679 to 00:29:42.681 over the run)
00:29:42.681 [2024-07-23 14:11:33.601850 - 14:11:33.604305] same failure pattern against tqpair=0x7f69c0000b90, addr=10.0.0.2, port=4420 (collapsed: 4 attempts)
00:29:42.681 [2024-07-23 14:11:33.604443] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:29:42.681 [2024-07-23 14:11:33.604486] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:42.681 [2024-07-23 14:11:33.604644 - 14:11:33.605695] same failure pattern continues (collapsed: 2 attempts)
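The restarted target comes up with the core masks seen above: nvmfappstart passed -m 0xF0 and the EAL line shows -c 0xF0. Decoding that mask is simple bit arithmetic; the sketch below (illustration only) shows it selects CPU cores 4-7:

/* Sketch: decode a DPDK/SPDK hex core mask into core indices.
 * 0xF0 == 0b11110000, i.e. cores 4, 5, 6 and 7. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xF0;          /* from "-c 0xF0" / "-m 0xF0" above */

    printf("core mask 0x%lX -> cores:", mask);
    for (int core = 0; core < 64; core++)
        if (mask & (1UL << core))
            printf(" %d", core);
    printf("\n");                       /* prints: core mask 0xF0 -> cores: 4 5 6 7 */
    return 0;
}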
00:29:42.681 [2024-07-23 14:11:33.606053 - 14:11:33.632383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (collapsed: 35 attempts, identical except for timestamps)
00:29:42.682 EAL: No free 2048 kB hugepages reported on node 1
00:29:42.682 [2024-07-23 14:11:33.632806 - 14:11:33.637800] same failure pattern against tqpair=0x7f69c0000b90, addr=10.0.0.2, port=4420 (collapsed: 7 attempts)
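The EAL warning above means NUMA node 1 had no free 2 MB hugepages when the target initialized. One way to check this is to read the per-node counters from sysfs; the sketch below assumes the standard Linux sysfs layout (the node1 path mirrors "node 1" in the warning):

/* Sketch: read the per-node count of free 2048 kB hugepages from sysfs. */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);                    /* node 1 may not exist on this machine */
        return 1;
    }

    long free_pages = 0;
    if (fscanf(f, "%ld", &free_pages) == 1)
        printf("node 1 free 2048 kB hugepages: %ld\n", free_pages);
    fclose(f);
    return 0;
}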
00:29:42.683 [2024-07-23 14:11:33.638173 - 14:11:33.653585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. (collapsed: 21 attempts, identical except for timestamps)
00:29:42.683 [2024-07-23 14:11:33.653994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.654342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.654352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-07-23 14:11:33.654766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.655188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.655199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-07-23 14:11:33.655501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.655849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.655859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-07-23 14:11:33.656223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.656626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.656643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-07-23 14:11:33.657059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.657280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.657289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-07-23 14:11:33.657636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.657990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.683 [2024-07-23 14:11:33.658000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.683 qpair failed and we were unable to recover it. 00:29:42.683 [2024-07-23 14:11:33.658448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.658809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.658819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 
00:29:42.684 [2024-07-23 14:11:33.659195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.659544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.659554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.659856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.660263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.660273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.660703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.661037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.661049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.661346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.661765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.661775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.662204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.662498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.662508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.662862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.663218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.663228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.663596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.664035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.664049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 
00:29:42.684 [2024-07-23 14:11:33.664497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.664920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.664931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.665234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.665424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.665435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.665863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.666288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.666299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.666596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.666891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.666902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.667239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.667664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.667674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.668122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.668419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.668430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.668782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.669133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.669144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 
00:29:42.684 [2024-07-23 14:11:33.669569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.669908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.669920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.670272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.670529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.670538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.670872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.671205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.671215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.671571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.672012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.672021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.672422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.672854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.672864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.673217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.673506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.673516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.673845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.674243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.674254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 
00:29:42.684 [2024-07-23 14:11:33.674586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.674988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.674998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.675346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.675697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.675707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.676073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.676069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:42.684 [2024-07-23 14:11:33.676444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.676455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.676829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.677084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.677094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.677436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.677838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.677849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.678280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.678716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.678727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.684 qpair failed and we were unable to recover it. 00:29:42.684 [2024-07-23 14:11:33.679038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.684 [2024-07-23 14:11:33.679454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.685 [2024-07-23 14:11:33.679465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.685 qpair failed and we were unable to recover it. 
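On Linux, errno = 111 is ECONNREFUSED: the host at 10.0.0.2 was reachable, but nothing was accepting connections on port 4420 (the IANA-assigned NVMe/TCP port), so every qpair connect attempt above was rejected immediately. A minimal sketch that reproduces the same connect() outcome outside SPDK, assuming no listener is present on the logged address and port:

```c
/* Minimal sketch: reproduce "connect() failed, errno = 111" outside SPDK.
 * Assumes nothing is listening on 10.0.0.2:4420 (address and port taken
 * from the log above). On Linux, errno 111 is ECONNREFUSED. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),   /* NVMe/TCP well-known port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target, this prints errno = 111. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}
```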
[... the connect() failed / qpair failed sequence resumes and repeats, timestamps advancing from 14:11:33.676 to 14:11:33.747 ...]
00:29:42.957 [2024-07-23 14:11:33.747544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.957 [2024-07-23 14:11:33.747894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.957 [2024-07-23 14:11:33.747904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.957 qpair failed and we were unable to recover it.
00:29:42.957 [2024-07-23 14:11:33.748246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.957 [2024-07-23 14:11:33.748486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.957 [2024-07-23 14:11:33.748495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-23 14:11:33.748920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.957 [2024-07-23 14:11:33.749325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.957 [2024-07-23 14:11:33.749335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-23 14:11:33.749684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.957 [2024-07-23 14:11:33.750050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.957 [2024-07-23 14:11:33.750061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.957 qpair failed and we were unable to recover it. 00:29:42.957 [2024-07-23 14:11:33.750488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.957 [2024-07-23 14:11:33.750554] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:42.957 [2024-07-23 14:11:33.750654] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.957 [2024-07-23 14:11:33.750662] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.958 [2024-07-23 14:11:33.750669] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:42.958 [2024-07-23 14:11:33.750773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:42.958 [2024-07-23 14:11:33.750913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.750923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.750882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:42.958 [2024-07-23 14:11:33.750987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:42.958 [2024-07-23 14:11:33.751341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.750988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:42.958 [2024-07-23 14:11:33.751686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.751697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 
00:29:42.958 [2024-07-23 14:11:33.752118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.752479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.752489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.752937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.753378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.753388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.753809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.754216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.754226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.754643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.755071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.755081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.755508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.755926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.755936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.756362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.756761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.756771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.757171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.757600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.757610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 
00:29:42.958 [2024-07-23 14:11:33.758010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.758407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.758418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.758838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.759263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.759274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.759672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.760098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.760108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.760460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.760915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.760926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.761352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.761715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.761726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.762151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.762549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.762560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.762986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.763404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.763416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 
00:29:42.958 [2024-07-23 14:11:33.763769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.764195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.764208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.764554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.764890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.764901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.765247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.765650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.765660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.766077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.766484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.766497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.766821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.767190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.767202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.767624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.768007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.768019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.768439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.768780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.768793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 
00:29:42.958 [2024-07-23 14:11:33.769219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.769647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.769663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.770076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.770485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.770497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.958 qpair failed and we were unable to recover it. 00:29:42.958 [2024-07-23 14:11:33.770911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.771318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.958 [2024-07-23 14:11:33.771329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.771666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.772028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.772039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.772465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.772877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.772889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.773320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.773686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.773696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.774050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.774445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.774457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 
00:29:42.959 [2024-07-23 14:11:33.774856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.775258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.775269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.775563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.775984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.775995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.776416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.776841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.776851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.777268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.777671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.777686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.778050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.778472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.778482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.778902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.779328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.779339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.779763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.780163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.780174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 
00:29:42.959 [2024-07-23 14:11:33.780548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.780935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.780946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.781346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.781709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.781720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.782118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.782539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.782549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.782898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.783245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.783256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.783621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.784048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.784059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.784485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.784821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.784832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.785243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.785592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.785606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 
00:29:42.959 [2024-07-23 14:11:33.785960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.786409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.786421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.786821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.787243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.787254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.787677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.788102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.788111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.959 qpair failed and we were unable to recover it. 00:29:42.959 [2024-07-23 14:11:33.788531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.788875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.959 [2024-07-23 14:11:33.788885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.789312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.789739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.789748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.790147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.790547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.790556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.790924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.791358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.791368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 
00:29:42.960 [2024-07-23 14:11:33.791794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.792218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.792229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.792588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.793011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.793021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.793419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.793847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.793859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.794279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.794648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.794660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.795082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.795482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.795492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.795939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.796223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.796233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.796632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.797079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.797090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 
00:29:42.960 [2024-07-23 14:11:33.797458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.797859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.797870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.798214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.798637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.798648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.799067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.799470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.799481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.799926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.800367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.800379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.800804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.801237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.801247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.801619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.802048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.802060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.802463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.802837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.802848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 
00:29:42.960 [2024-07-23 14:11:33.803269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.803672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.803682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.804104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.804504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.804515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.804872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.805278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.805289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.805567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.805900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.805911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.806332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.806733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.806743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.807165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.807567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.807577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.960 [2024-07-23 14:11:33.808001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.808341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.808351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 
00:29:42.960 [2024-07-23 14:11:33.808749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.809166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.960 [2024-07-23 14:11:33.809176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.960 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.809526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.809951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.809961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.810384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.810735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.810745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.811088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.811519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.811533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.811905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.812352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.812368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.812818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.813187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.813198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.813512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.813949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.813964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 
00:29:42.961 [2024-07-23 14:11:33.814313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.814671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.814682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.815097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.815503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.815515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.815938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.816346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.816360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.816771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.817189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.817200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.817628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.817987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.817997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.818307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.818657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.818668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.819026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.819348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.819363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 
00:29:42.961 [2024-07-23 14:11:33.819768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.820190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.820201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.820552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.820922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.820933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.821388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.821738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.821751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.822155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.822448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.822459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.822880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.823308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.823320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.823933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.824312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.824326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 00:29:42.961 [2024-07-23 14:11:33.824679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.825027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.961 [2024-07-23 14:11:33.825038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:42.961 qpair failed and we were unable to recover it. 
00:29:42.961 [2024-07-23 14:11:33.825470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.961 [2024-07-23 14:11:33.825883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.961 [2024-07-23 14:11:33.825896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:42.961 qpair failed and we were unable to recover it.
[... the same connect()-failed / sock-connection-error / qpair-failed sequence repeats for tqpair=0x7f69c0000b90, 31 consecutive attempts in total, through 2024-07-23 14:11:33.848508 ...]
00:29:42.963 [2024-07-23 14:11:33.848963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.963 [2024-07-23 14:11:33.849358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.963 [2024-07-23 14:11:33.849378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420
00:29:42.963 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x1246710, 123 consecutive attempts in total ...]
00:29:42.967 [2024-07-23 14:11:33.943400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.967 [2024-07-23 14:11:33.943758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:42.967 [2024-07-23 14:11:33.943771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420
00:29:42.967 qpair failed and we were unable to recover it.
00:29:42.967 [2024-07-23 14:11:33.944214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.944653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.944666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.967 qpair failed and we were unable to recover it. 00:29:42.967 [2024-07-23 14:11:33.945094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.945519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.945533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.967 qpair failed and we were unable to recover it. 00:29:42.967 [2024-07-23 14:11:33.945915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.946285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.946298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.967 qpair failed and we were unable to recover it. 00:29:42.967 [2024-07-23 14:11:33.946590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.947031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.947048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.967 qpair failed and we were unable to recover it. 00:29:42.967 [2024-07-23 14:11:33.947396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.967 [2024-07-23 14:11:33.947752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.947765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.948149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.948488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.948501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.948869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.949155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.949169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 
00:29:42.968 [2024-07-23 14:11:33.949458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.949862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.949875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.950232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.950590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.950603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.950961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.951404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.951421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.951769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.952276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.952290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.952597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.952954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.952967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.953328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.953682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.953695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.954072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.954495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.954508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 
00:29:42.968 [2024-07-23 14:11:33.954668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.955039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.955056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.955460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.955794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.955807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.956158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.956501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.956514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.956858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.957290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.957303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.957717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.958065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.958078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.958241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.958601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.958614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.958965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.959315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.959328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 
00:29:42.968 [2024-07-23 14:11:33.959687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.960093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.960106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:42.968 [2024-07-23 14:11:33.960466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.960822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:42.968 [2024-07-23 14:11:33.960836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:42.968 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.961248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.961609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.961622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.961927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.962213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.962227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.962549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.962891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.962904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.963324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.963630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.963643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.963798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.964096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.964110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 
00:29:43.259 [2024-07-23 14:11:33.964472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.964893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.964906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.965267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.965636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.965649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.966009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.966439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.966452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.966858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.967262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.967275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.967714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.968012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.968025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.968448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.968666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.968678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.969041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.969395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.969409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 
00:29:43.259 [2024-07-23 14:11:33.969709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.970111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.970125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.970512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.970862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.970875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.971305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.971658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.971671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.972056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.972354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.972367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.972791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.973132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.973145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1246710 with addr=10.0.0.2, port=4420 00:29:43.259 qpair failed and we were unable to recover it. 00:29:43.259 [2024-07-23 14:11:33.973452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.259 [2024-07-23 14:11:33.973839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.260 [2024-07-23 14:11:33.973856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.260 qpair failed and we were unable to recover it. 00:29:43.260 [2024-07-23 14:11:33.973995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.260 [2024-07-23 14:11:33.974424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.260 [2024-07-23 14:11:33.974438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.260 qpair failed and we were unable to recover it. 
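For reference: errno = 111 on Linux is ECONNREFUSED, i.e. each connect() to 10.0.0.2 port 4420 (the IANA-assigned NVMe/TCP port) was actively refused because nothing was accepting connections on the target at that moment. Note also that the failing tqpair pointer changes from 0x1246710 to 0x7f69c8000b90 above, consistent with a fresh qpair object being used for the later attempts. A minimal standalone sketch, assuming plain POSIX sockets rather than SPDK's actual posix.c implementation, that reproduces the same errno:

/* Minimal sketch (not SPDK code): reproduce the errno = 111 above by
 * connect()ing to a TCP port that has no listener. On Linux, errno 111
 * is ECONNREFUSED ("Connection refused"). */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }

    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}

Run against a reachable host with no listener on that port, this should print "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create lines above.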
00:29:43.260 [2024-07-23 14:11:33.974824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.260 [2024-07-23 14:11:33.975194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.260 [2024-07-23 14:11:33.975210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:43.260 qpair failed and we were unable to recover it.
[... repeated identical connect() retry cycles against tqpair=0x7f69c8000b90 (errno = 111, addr=10.0.0.2, port=4420) elided ...]
00:29:43.263 [2024-07-23 14:11:34.038772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.263 [2024-07-23 14:11:34.039066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.263 [2024-07-23 14:11:34.039079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420
00:29:43.263 qpair failed and we were unable to recover it.
00:29:43.263 [2024-07-23 14:11:34.039383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.039739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.039753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.040129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.040488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.040502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.040929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.041296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.041311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.041677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.042051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.042065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.042471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.042893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.042907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.043285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.043580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.043593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.043971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.044326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.044339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 
00:29:43.263 [2024-07-23 14:11:34.044745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.045104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.045118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.045472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.045818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.045832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.046242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.046522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.046535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.046896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.047274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.047288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.047781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.048086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.048099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.048667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.048861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.048874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.049333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.049745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.049758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 
00:29:43.263 [2024-07-23 14:11:34.050061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.050436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.050448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.050805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.051231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.051245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.051602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.051956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.051969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.052329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.052622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.052635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.052924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.053216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.053230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.053633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.053989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.054002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.054353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.054710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.054723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 
00:29:43.263 [2024-07-23 14:11:34.055030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.055415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.055429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.055775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.056154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.056168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.263 qpair failed and we were unable to recover it. 00:29:43.263 [2024-07-23 14:11:34.056594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.263 [2024-07-23 14:11:34.057017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.057031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.057376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.057731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.057744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.058124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.058413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.058426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.058788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.059095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.059108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.059514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.059859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.059872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 
00:29:43.264 [2024-07-23 14:11:34.060240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.060539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.060553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.060909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.061251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.061264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.061623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.061927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.061940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.062282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.062696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.062710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.063131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.063329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.063341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.063709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.064069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.064083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.064509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.064888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.064901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 
00:29:43.264 [2024-07-23 14:11:34.065184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.065544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.065557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.066012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.066373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.066386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.066736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.067034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.067052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.067402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.067828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.067842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.068128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.068485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.068498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.068785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.069166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.069179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.069537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.069807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.069820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 
00:29:43.264 [2024-07-23 14:11:34.070173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.070526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.070539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.070893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.071254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.071268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.071705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.072131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.072145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.264 [2024-07-23 14:11:34.072559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.072969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.264 [2024-07-23 14:11:34.072982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.264 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.073351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.073761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.073774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.074128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.074504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.074517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.074856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.075211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.075225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 
00:29:43.265 [2024-07-23 14:11:34.075632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.075912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.075925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.076364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.076721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.076734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.077045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.077400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.077413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.077690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.078093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.078107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.078482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.078887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.078900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.079247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.079651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.079664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.080041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.080239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.080252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 
00:29:43.265 [2024-07-23 14:11:34.080626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.081028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.081041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.081393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.081740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.081753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.082092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.082368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.082381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.082785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.083194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.083207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.083594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.083949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.083962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.084369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.084747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.084759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.085146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.085576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.085590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 
00:29:43.265 [2024-07-23 14:11:34.086039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.086396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.086409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.086839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.087203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.087217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.087571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.087985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.087998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.088349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.088714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.088728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.089159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.089296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.089309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.089603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.090030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.090046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.090389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.090749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.090763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 
00:29:43.265 [2024-07-23 14:11:34.091117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.091401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.091414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.091757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.092188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.092201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.092584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.092928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.092941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.265 qpair failed and we were unable to recover it. 00:29:43.265 [2024-07-23 14:11:34.093374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.093675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.265 [2024-07-23 14:11:34.093688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.094048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.094408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.094421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c8000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.094884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.095230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.095242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.095646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.096001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.096010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 
00:29:43.266 [2024-07-23 14:11:34.096432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.096820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.096829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.097175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.097465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.097474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.097823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.098221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.098231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.098575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.098998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.099007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.099285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.099706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.099715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.100059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.100419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.100429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.100761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.101099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.101109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 
00:29:43.266 [2024-07-23 14:11:34.101511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.101895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.101905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.102321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.102674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.102684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.103085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.103508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.103518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.103955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.104322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.104332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.104714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.105109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.105119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.105542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.105908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.105917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.106269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.106602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.106612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 
00:29:43.266 [2024-07-23 14:11:34.106960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.107421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.107431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.107830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.108199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.108209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.108569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.108910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.108919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.109275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.109673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.109683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.109970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.110159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.110169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.110568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.110969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.110978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 00:29:43.266 [2024-07-23 14:11:34.111341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.111737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.266 [2024-07-23 14:11:34.111747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.266 qpair failed and we were unable to recover it. 
00:29:43.266 [2024-07-23 14:11:34.112144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.266 [2024-07-23 14:11:34.112547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.266 [2024-07-23 14:11:34.112557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.266 qpair failed and we were unable to recover it.
00:29:43.266 [... this four-record failure pattern repeats back-to-back for every reconnect attempt from 14:11:34.112 through 14:11:34.227: two connect() failures with errno = 111, a sock connection error on tqpair=0x7f69c0000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:29:43.273 [2024-07-23 14:11:34.227071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.273 [2024-07-23 14:11:34.227467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.273 [2024-07-23 14:11:34.227476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.273 qpair failed and we were unable to recover it.
00:29:43.273 [2024-07-23 14:11:34.227911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.228257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.228267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.228664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.229007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.229017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.229435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.229797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.229807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.230155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.230571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.230581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.230928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.231282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.231298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.231710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.232134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.232145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.232565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.232931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.232941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 
00:29:43.273 [2024-07-23 14:11:34.233285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.233635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.233645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.233936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.234213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.234224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.234571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.234936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.234946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.235378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.235830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.235840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.236242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.236591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.236602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.236887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.237240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.237250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.237672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.238012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.238021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 
00:29:43.273 [2024-07-23 14:11:34.238502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.238852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.238863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.239154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.239294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.239304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.239681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.240012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.240022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.273 qpair failed and we were unable to recover it. 00:29:43.273 [2024-07-23 14:11:34.240419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.273 [2024-07-23 14:11:34.240865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.240874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.241270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.241625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.241634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.241914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.242262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.242272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.242672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.242860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.242870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 
00:29:43.274 [2024-07-23 14:11:34.243292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.243713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.243722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.244004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.244354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.244364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.244788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.245130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.245140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.245442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.245839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.245852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.246193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.246596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.246606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.246946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.247295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.247305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.247747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.248103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.248114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 
00:29:43.274 [2024-07-23 14:11:34.248512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.248725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.248735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.249184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.249526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.249536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.249959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.250312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.250323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.250604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.250901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.250912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.251267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.251667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.251677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.252026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.252382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.252392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.252825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.253233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.253246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 
00:29:43.274 [2024-07-23 14:11:34.253675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.254059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.254070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.254348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.254745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.254755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.254942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.255289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.255300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.255651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.256051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.256061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.256414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.256745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.256755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.274 [2024-07-23 14:11:34.257090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.257459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.274 [2024-07-23 14:11:34.257469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.274 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.257770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.258192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.258203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 
00:29:43.275 [2024-07-23 14:11:34.258569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.258863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.258873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.259271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.259620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.259630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.260030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.260432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.260444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.260865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.261227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.261238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.261584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.261719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.261729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.262101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.262453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.262463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.262797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.263131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.263141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 
00:29:43.275 [2024-07-23 14:11:34.263495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.263920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.263930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.264345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.264764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.264774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.265185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.265590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.265600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.265886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.266253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.266263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.266553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.266949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.266959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.267147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.267339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.275 [2024-07-23 14:11:34.267350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.275 qpair failed and we were unable to recover it. 00:29:43.275 [2024-07-23 14:11:34.267764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.542 [2024-07-23 14:11:34.268130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.268140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 
00:29:43.543 [2024-07-23 14:11:34.268490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.268908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.268918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.269205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.269475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.269484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.269910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.270307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.270317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.270653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.271093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.271103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.271521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.271826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.271836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.272235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.272682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.272692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.273136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.273480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.273490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 
00:29:43.543 [2024-07-23 14:11:34.273791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.274214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.274224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.274525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.274881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.274891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.275239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.275525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.275535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.275815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.276209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.276219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.276643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.277007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.277017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.277470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.277814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.277823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.278175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.278505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.278515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 
00:29:43.543 [2024-07-23 14:11:34.278706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.278894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.278904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.279327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.279748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.279758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.280155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.280368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.280377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.280803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.281223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.281233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.543 qpair failed and we were unable to recover it. 00:29:43.543 [2024-07-23 14:11:34.281630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.543 [2024-07-23 14:11:34.281919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.281928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.282287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.282569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.282579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.282952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.283287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.283297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 
00:29:43.544 [2024-07-23 14:11:34.283650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.284071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.284081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.284416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.284677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.284686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.285053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.285338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.285347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.285535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.285883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.285893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.286237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.286448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.286457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.286868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.287267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.287277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.287685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.287976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.287986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 
00:29:43.544 [2024-07-23 14:11:34.288432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.288617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.288627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.289056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.289431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.289440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.289867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.290290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.290298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.290702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.291051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.291059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.291482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.291844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.291852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.292265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.292617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.292628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.292933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.293355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.293368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 
00:29:43.544 [2024-07-23 14:11:34.293724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.294138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.294150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.294590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.294951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.294963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.295365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.295711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.295724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.296059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.296355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.296369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.544 qpair failed and we were unable to recover it. 00:29:43.544 [2024-07-23 14:11:34.296726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.544 [2024-07-23 14:11:34.297073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.545 [2024-07-23 14:11:34.297085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.545 qpair failed and we were unable to recover it. 00:29:43.545 [2024-07-23 14:11:34.297430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.545 [2024-07-23 14:11:34.297913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.545 [2024-07-23 14:11:34.297924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.545 qpair failed and we were unable to recover it. 00:29:43.545 [2024-07-23 14:11:34.298128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.545 [2024-07-23 14:11:34.298431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.545 [2024-07-23 14:11:34.298445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.545 qpair failed and we were unable to recover it. 
00:29:43.545 [2024-07-23 14:11:34.298812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.545 [2024-07-23 14:11:34.299234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.545 [2024-07-23 14:11:34.299249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.545 qpair failed and we were unable to recover it.
00:29:43.550 [2024-07-23 14:11:34.299654 .. 14:11:34.400944] (the same failure cycle repeats for every further reconnect attempt in this window: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error on tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420, and each qpair fails and cannot be recovered)
00:29:43.550 [2024-07-23 14:11:34.401324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.401473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.401495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.550 qpair failed and we were unable to recover it. 00:29:43.550 [2024-07-23 14:11:34.401805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.402196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.402223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.550 qpair failed and we were unable to recover it. 00:29:43.550 [2024-07-23 14:11:34.402587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.402897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.402922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.550 qpair failed and we were unable to recover it. 00:29:43.550 [2024-07-23 14:11:34.403228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.403542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.403567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.550 qpair failed and we were unable to recover it. 00:29:43.550 [2024-07-23 14:11:34.403960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.550 [2024-07-23 14:11:34.404287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.404312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 00:29:43.551 [2024-07-23 14:11:34.404674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.405112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.405137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 00:29:43.551 [2024-07-23 14:11:34.405439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.405808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.405832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 
00:29:43.551 [2024-07-23 14:11:34.406157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.406326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.406344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 00:29:43.551 [2024-07-23 14:11:34.406698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.407130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.407158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 00:29:43.551 [2024-07-23 14:11:34.407434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.407759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.407783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 00:29:43.551 [2024-07-23 14:11:34.408140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.408590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.408616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 00:29:43.551 [2024-07-23 14:11:34.409049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.409441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.409467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 00:29:43.551 [2024-07-23 14:11:34.409888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.410260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.410276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 00:29:43.551 [2024-07-23 14:11:34.410628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.411005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:43.551 [2024-07-23 14:11:34.411021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420 00:29:43.551 qpair failed and we were unable to recover it. 
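The errno = 111 repeated above is ECONNREFUSED: the NVMe/TCP initiator is dialing 10.0.0.2:4420 before the target has added a listener there, so each connect() is answered with a TCP reset and the qpair cannot be established. A minimal shell sketch of the same probe, assuming a netcat binary is available on the host (this loop is illustrative and not part of the test script):

# Probe the address/port the failing connect() calls are dialing; nc exits
# non-zero while nothing listens on 10.0.0.2:4420 (connection refused).
until nc -z -w 1 10.0.0.2 4420; do
    echo "no listener on 10.0.0.2:4420 yet, retrying"
    sleep 1
done
echo "listener is up"

Once the target adds its NVMe/TCP listener later in the run, the same probe starts succeeding and the error storm stops.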
00:29:43.551 [2024-07-23 14:11:34.411338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.411623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.411637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.551 qpair failed and we were unable to recover it.
00:29:43.551 [2024-07-23 14:11:34.411991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 14:11:34 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:43.551 [2024-07-23 14:11:34.412293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.412304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.551 qpair failed and we were unable to recover it.
00:29:43.551 14:11:34 -- common/autotest_common.sh@852 -- # return 0
00:29:43.551 14:11:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:29:43.551 14:11:34 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:43.551 [2024-07-23 14:11:34.412677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 14:11:34 -- common/autotest_common.sh@10 -- # set +x
00:29:43.551 [2024-07-23 14:11:34.413051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.413062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.551 qpair failed and we were unable to recover it.
00:29:43.551 [2024-07-23 14:11:34.413411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.413601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.413612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.551 qpair failed and we were unable to recover it.
00:29:43.551 [2024-07-23 14:11:34.413888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.414219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.414230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.551 qpair failed and we were unable to recover it.
00:29:43.551 [2024-07-23 14:11:34.414417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.414756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.414767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.551 qpair failed and we were unable to recover it.
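The (( i == 0 )) and return 0 lines above are xtrace output from a helper in autotest_common.sh whose body is not visible in this excerpt; the trace pattern is consistent with a bounded retry loop along the following lines (function name, counter start, and the probe used are all illustrative assumptions, not the actual SPDK code):

# Illustrative bounded-wait helper; only its '(( i == 0 ))' / 'return 0'
# trace lines appear in this log.
wait_for_listener() {
    local i=$1                      # attempt budget, e.g. 10
    until nc -z -w 1 10.0.0.2 4420; do
        (( i-- ))
        (( i == 0 )) && return 1    # budget exhausted: report failure
        sleep 1
    done
    return 0                        # the target answered: report success
}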
00:29:43.551 [2024-07-23 14:11:34.415177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.415461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.551 [2024-07-23 14:11:34.415472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.551 qpair failed and we were unable to recover it.
00:29:43.554 [2024-07-23 14:11:34.441861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.442138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.442149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 [2024-07-23 14:11:34.442486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.442775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.442785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 [2024-07-23 14:11:34.443150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.443431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.443443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 [2024-07-23 14:11:34.443753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.444039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.444053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 [2024-07-23 14:11:34.444326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.444543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.444552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 [2024-07-23 14:11:34.444841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.445182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.445193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 [2024-07-23 14:11:34.445492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 14:11:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:43.554 [2024-07-23 14:11:34.445766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.445777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 14:11:34 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:43.554 [2024-07-23 14:11:34.446128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 14:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.554 [2024-07-23 14:11:34.446476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.446488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 14:11:34 -- common/autotest_common.sh@10 -- # set +x
00:29:43.554 [2024-07-23 14:11:34.446832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.447124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.447134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.554 [2024-07-23 14:11:34.450552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.451099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.451109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
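A few lines up, host/target_disconnect.sh line 19 issues the test's first RPC, rpc_cmd bdev_malloc_create 64 512 -b Malloc0. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; run standalone, the equivalent call would look like this (the -s socket path is the SPDK default and an assumption here, since the wrapper hides it):

# Create a 64 MiB RAM-backed bdev named Malloc0 with a 512-byte block size,
# matching the rpc_cmd invocation in the trace above.
scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0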
00:29:43.554 [2024-07-23 14:11:34.451474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.451745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.554 [2024-07-23 14:11:34.451755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.554 qpair failed and we were unable to recover it.
00:29:43.555 [2024-07-23 14:11:34.459735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.460084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.460096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 [2024-07-23 14:11:34.460375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.460714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.460728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 [2024-07-23 14:11:34.461028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.461418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.461429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 [2024-07-23 14:11:34.461712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.462004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.462015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 [2024-07-23 14:11:34.462302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.462584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.462594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 [2024-07-23 14:11:34.462875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.463142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.463153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 Malloc0
00:29:43.555 [2024-07-23 14:11:34.463494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 14:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.555 [2024-07-23 14:11:34.463839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.463850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 14:11:34 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:43.555 14:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.555 [2024-07-23 14:11:34.464139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 14:11:34 -- common/autotest_common.sh@10 -- # set +x
00:29:43.555 [2024-07-23 14:11:34.464418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.464428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 [2024-07-23 14:11:34.464771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.465062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.465072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.555 qpair failed and we were unable to recover it.
00:29:43.555 [2024-07-23 14:11:34.465361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.465706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.555 [2024-07-23 14:11:34.465715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.556 qpair failed and we were unable to recover it.
00:29:43.556 [2024-07-23 14:11:34.465991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.466255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.466264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.556 qpair failed and we were unable to recover it.
00:29:43.556 [2024-07-23 14:11:34.466608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.466893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.466904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.556 qpair failed and we were unable to recover it.
00:29:43.556 [2024-07-23 14:11:34.467138] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:43.556 [2024-07-23 14:11:34.467180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.467467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.467477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.556 qpair failed and we were unable to recover it.
00:29:43.556 [2024-07-23 14:11:34.467754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.468041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.468054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.556 qpair failed and we were unable to recover it.
00:29:43.556 [2024-07-23 14:11:34.468325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.468662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.468672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.556 qpair failed and we were unable to recover it.
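host/target_disconnect.sh line 21 then initializes the TCP transport, and the *** TCP Transport Init *** notice from tcp.c:659 confirms the target accepted it. A standalone sketch of the same call (again assuming the default RPC socket; the -o flag is reproduced verbatim from the trace, and its meaning depends on the SPDK revision under test, so it is left uninterpreted here):

# Bring up the NVMe-oF TCP transport on the target, as the trace shows.
scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o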
00:29:43.556 [2024-07-23 14:11:34.468949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.469311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.469321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.556 qpair failed and we were unable to recover it.
00:29:43.556 [2024-07-23 14:11:34.472701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.472991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:43.556 [2024-07-23 14:11:34.473001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69c0000b90 with addr=10.0.0.2, port=4420
00:29:43.556 qpair failed and we were unable to recover it.
[... connect() failed (errno = 111) retry sequence continues ...]
00:29:43.556 14:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.556 14:11:34 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:43.556 14:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.556 14:11:34 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retry sequence continues through 14:11:34.477 ...]
[... connect() failed (errno = 111) / sock connection error retry sequence continues through 14:11:34.481 ...]
[... connect() failed (errno = 111) retry sequence continues ...]
00:29:43.557 14:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.557 14:11:34 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:43.557 14:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.557 14:11:34 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retry sequence continues through 14:11:34.486 ...]
[... connect() failed (errno = 111) / sock connection error retry sequence continues through 14:11:34.490 ...]
[... connect() failed (errno = 111) retry sequence continues ...]
00:29:43.558 14:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.558 14:11:34 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:43.558 14:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.558 14:11:34 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) retry sequence continues through 14:11:34.494 ...]
[... connect() failed (errno = 111) retry sequence continues through 14:11:34.495 ...]
00:29:43.558 [2024-07-23 14:11:34.495364] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:43.558 [2024-07-23 14:11:34.497701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:43.558 [2024-07-23 14:11:34.497840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:43.558 [2024-07-23 14:11:34.497861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:43.558 [2024-07-23 14:11:34.497868] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:43.558 [2024-07-23 14:11:34.497874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:43.558 [2024-07-23 14:11:34.497896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:43.558 qpair failed and we were unable to recover it.
00:29:43.558 14:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.558 14:11:34 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:43.558 14:11:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.558 14:11:34 -- common/autotest_common.sh@10 -- # set +x
00:29:43.558 14:11:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.558 14:11:34 -- host/target_disconnect.sh@58 -- # wait 3436114
[... the same fabric CONNECT failure sequence repeats at 14:11:34.507 ...]
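The three rpc_cmd invocations traced above are the entire target-side setup for this test. As a reference, here is a minimal sketch of the same sequence driven by hand with SPDK's scripts/rpc.py (rpc_cmd in the autotest harness is essentially a wrapper around that script); it assumes a running nvmf_tgt on which the TCP transport has already been created, per the "*** TCP Transport Init ***" notice above, and reuses the NQN, serial number, address, and port that appear in the trace:

    # Create the subsystem: allow any host (-a), set the serial number (-s)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    # Attach the Malloc0 bdev to the subsystem as a namespace
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Listen for the subsystem, and for discovery, on NVMe/TCP 10.0.0.2:4420
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once the listener is up, the errno 111 (ECONNREFUSED) retries stop; the "Unknown controller ID 0x1" failures that follow are the target rejecting I/O-qpair CONNECT commands whose controller ID does not match a live admin connection, which appears to be exactly the disconnect path host/target_disconnect.sh is exercising.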
00:29:43.558 [2024-07-23 14:11:34.517759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.558 [2024-07-23 14:11:34.517888] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.558 [2024-07-23 14:11:34.517905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.558 [2024-07-23 14:11:34.517912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.558 [2024-07-23 14:11:34.517918] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.558 [2024-07-23 14:11:34.517934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.558 qpair failed and we were unable to recover it. 00:29:43.558 [2024-07-23 14:11:34.527683] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.558 [2024-07-23 14:11:34.527818] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.558 [2024-07-23 14:11:34.527834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.558 [2024-07-23 14:11:34.527841] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.558 [2024-07-23 14:11:34.527847] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.558 [2024-07-23 14:11:34.527864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.558 qpair failed and we were unable to recover it. 00:29:43.559 [2024-07-23 14:11:34.537698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.559 [2024-07-23 14:11:34.537825] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.559 [2024-07-23 14:11:34.537842] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.559 [2024-07-23 14:11:34.537849] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.559 [2024-07-23 14:11:34.537855] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.559 [2024-07-23 14:11:34.537871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.559 qpair failed and we were unable to recover it. 
00:29:43.559 [2024-07-23 14:11:34.547737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.559 [2024-07-23 14:11:34.547858] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.559 [2024-07-23 14:11:34.547875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.559 [2024-07-23 14:11:34.547881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.559 [2024-07-23 14:11:34.547888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.559 [2024-07-23 14:11:34.547904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.559 qpair failed and we were unable to recover it. 00:29:43.820 [2024-07-23 14:11:34.557715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.557841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.557858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.557864] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.557870] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.557886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 00:29:43.820 [2024-07-23 14:11:34.567777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.567959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.567978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.567987] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.567994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.568011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 
00:29:43.820 [2024-07-23 14:11:34.577824] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.577959] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.577977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.577988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.577995] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.578014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 00:29:43.820 [2024-07-23 14:11:34.587845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.587963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.587981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.587988] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.587994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.588011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 00:29:43.820 [2024-07-23 14:11:34.597876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.597999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.598016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.598023] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.598033] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.598056] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 
00:29:43.820 [2024-07-23 14:11:34.607888] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.608010] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.608027] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.608034] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.608039] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.608065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 00:29:43.820 [2024-07-23 14:11:34.617948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.618080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.618098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.618105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.618111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.618128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 00:29:43.820 [2024-07-23 14:11:34.627927] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.628054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.628071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.628078] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.628084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.628100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 
00:29:43.820 [2024-07-23 14:11:34.637922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.638056] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.638073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.638080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.638085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.638102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 00:29:43.820 [2024-07-23 14:11:34.647968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.648096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.648113] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.648119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.648125] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.820 [2024-07-23 14:11:34.648142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.820 qpair failed and we were unable to recover it. 00:29:43.820 [2024-07-23 14:11:34.658018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.820 [2024-07-23 14:11:34.658149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.820 [2024-07-23 14:11:34.658166] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.820 [2024-07-23 14:11:34.658172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.820 [2024-07-23 14:11:34.658178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.658195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 
00:29:43.821 [2024-07-23 14:11:34.668136] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.668258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.668274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.668281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.668287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.668303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.821 [2024-07-23 14:11:34.678127] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.678244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.678261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.678267] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.678273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.678289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.821 [2024-07-23 14:11:34.688129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.688288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.688306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.688316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.688323] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.688339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 
00:29:43.821 [2024-07-23 14:11:34.698141] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.698284] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.698300] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.698307] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.698313] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.698329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.821 [2024-07-23 14:11:34.708382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.708519] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.708536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.708543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.708549] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.708566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.821 [2024-07-23 14:11:34.718193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.718313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.718330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.718337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.718343] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.718359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 
00:29:43.821 [2024-07-23 14:11:34.728243] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.728399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.728416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.728422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.728428] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.728444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.821 [2024-07-23 14:11:34.738396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.738555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.738571] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.738578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.738584] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.738601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.821 [2024-07-23 14:11:34.748346] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.748481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.748498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.748505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.748511] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.748527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 
00:29:43.821 [2024-07-23 14:11:34.758493] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.758622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.758638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.758645] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.758650] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.758667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.821 [2024-07-23 14:11:34.768404] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.768527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.768544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.768550] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.768556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.768572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.821 [2024-07-23 14:11:34.778549] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.778670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.778687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.778697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.778703] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.778719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 
00:29:43.821 [2024-07-23 14:11:34.788499] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.821 [2024-07-23 14:11:34.788637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.821 [2024-07-23 14:11:34.788655] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.821 [2024-07-23 14:11:34.788662] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.821 [2024-07-23 14:11:34.788668] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.821 [2024-07-23 14:11:34.788684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.821 qpair failed and we were unable to recover it. 00:29:43.822 [2024-07-23 14:11:34.798489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.822 [2024-07-23 14:11:34.798706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.822 [2024-07-23 14:11:34.798723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.822 [2024-07-23 14:11:34.798730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.822 [2024-07-23 14:11:34.798736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.822 [2024-07-23 14:11:34.798752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.822 qpair failed and we were unable to recover it. 00:29:43.822 [2024-07-23 14:11:34.808525] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.822 [2024-07-23 14:11:34.808652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.822 [2024-07-23 14:11:34.808668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.822 [2024-07-23 14:11:34.808675] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.822 [2024-07-23 14:11:34.808681] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.822 [2024-07-23 14:11:34.808697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.822 qpair failed and we were unable to recover it. 
00:29:43.822 [2024-07-23 14:11:34.818538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.822 [2024-07-23 14:11:34.818669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.822 [2024-07-23 14:11:34.818687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.822 [2024-07-23 14:11:34.818697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.822 [2024-07-23 14:11:34.818705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.822 [2024-07-23 14:11:34.818723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.822 qpair failed and we were unable to recover it. 00:29:43.822 [2024-07-23 14:11:34.828552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:43.822 [2024-07-23 14:11:34.828675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:43.822 [2024-07-23 14:11:34.828692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:43.822 [2024-07-23 14:11:34.828699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:43.822 [2024-07-23 14:11:34.828705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:43.822 [2024-07-23 14:11:34.828723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:43.822 qpair failed and we were unable to recover it. 00:29:44.085 [2024-07-23 14:11:34.838602] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.085 [2024-07-23 14:11:34.838722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.085 [2024-07-23 14:11:34.838739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.085 [2024-07-23 14:11:34.838746] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.085 [2024-07-23 14:11:34.838751] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.085 [2024-07-23 14:11:34.838768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.085 qpair failed and we were unable to recover it. 
00:29:44.085 [2024-07-23 14:11:34.848604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.085 [2024-07-23 14:11:34.848725] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.085 [2024-07-23 14:11:34.848742] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.085 [2024-07-23 14:11:34.848749] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.085 [2024-07-23 14:11:34.848755] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.085 [2024-07-23 14:11:34.848772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.085 qpair failed and we were unable to recover it. 00:29:44.085 [2024-07-23 14:11:34.858647] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.085 [2024-07-23 14:11:34.858777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.085 [2024-07-23 14:11:34.858794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.085 [2024-07-23 14:11:34.858801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.085 [2024-07-23 14:11:34.858807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.085 [2024-07-23 14:11:34.858823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.085 qpair failed and we were unable to recover it. 00:29:44.085 [2024-07-23 14:11:34.868663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.085 [2024-07-23 14:11:34.868783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.085 [2024-07-23 14:11:34.868803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.085 [2024-07-23 14:11:34.868809] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.085 [2024-07-23 14:11:34.868815] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.085 [2024-07-23 14:11:34.868831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.085 qpair failed and we were unable to recover it. 
00:29:44.085 [2024-07-23 14:11:34.878693] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.085 [2024-07-23 14:11:34.878817] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.085 [2024-07-23 14:11:34.878834] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.085 [2024-07-23 14:11:34.878840] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.085 [2024-07-23 14:11:34.878846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.085 [2024-07-23 14:11:34.878862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.085 qpair failed and we were unable to recover it. 00:29:44.085 [2024-07-23 14:11:34.888717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.085 [2024-07-23 14:11:34.888836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.085 [2024-07-23 14:11:34.888852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.085 [2024-07-23 14:11:34.888860] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.085 [2024-07-23 14:11:34.888865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.085 [2024-07-23 14:11:34.888882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.085 qpair failed and we were unable to recover it. 00:29:44.085 [2024-07-23 14:11:34.898760] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.085 [2024-07-23 14:11:34.898887] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.085 [2024-07-23 14:11:34.898904] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.085 [2024-07-23 14:11:34.898911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.085 [2024-07-23 14:11:34.898917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.085 [2024-07-23 14:11:34.898934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.085 qpair failed and we were unable to recover it. 
00:29:44.085 [2024-07-23 14:11:34.908792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.085 [2024-07-23 14:11:34.908916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.085 [2024-07-23 14:11:34.908930] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.085 [2024-07-23 14:11:34.908937] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.085 [2024-07-23 14:11:34.908943] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.085 [2024-07-23 14:11:34.908961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.085 qpair failed and we were unable to recover it.
00:29:44.085 [2024-07-23 14:11:34.918816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.085 [2024-07-23 14:11:34.918939] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.085 [2024-07-23 14:11:34.918956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.085 [2024-07-23 14:11:34.918963] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.085 [2024-07-23 14:11:34.918969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.085 [2024-07-23 14:11:34.918986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.085 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:34.928835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:34.928983] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:34.929000] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:34.929007] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:34.929013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:34.929029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:34.938813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:34.938956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:34.938972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:34.938979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:34.938985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:34.939001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:34.948840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:34.948960] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:34.948977] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:34.948984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:34.948990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:34.949005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:34.958938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:34.959064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:34.959084] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:34.959090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:34.959096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:34.959112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:34.968940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:34.969078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:34.969094] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:34.969101] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:34.969106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:34.969122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:34.978987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:34.979123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:34.979140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:34.979147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:34.979153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:34.979170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:34.989022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:34.989145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:34.989162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:34.989169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:34.989174] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:34.989190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:34.999055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:34.999177] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:34.999193] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:34.999200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:34.999206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:34.999226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.009098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.009225] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.009242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.009249] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.009255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.009271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.019110] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.019233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.019250] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.019256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.019262] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.019278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.029133] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.029251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.029267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.029274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.029280] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.029296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.039157] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.039285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.039302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.039309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.039315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.039331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.049120] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.049243] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.049261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.049268] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.049274] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.049290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.059200] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.059328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.059345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.059351] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.059357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.059373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.069218] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.069367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.069383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.069390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.069396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.069413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.079412] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.079538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.079556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.079563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.079568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.079585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.089299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.089420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.089436] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.089443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.089453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.089470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.086 [2024-07-23 14:11:35.099272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.086 [2024-07-23 14:11:35.099405] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.086 [2024-07-23 14:11:35.099422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.086 [2024-07-23 14:11:35.099428] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.086 [2024-07-23 14:11:35.099434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.086 [2024-07-23 14:11:35.099450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.086 qpair failed and we were unable to recover it.
00:29:44.345 [2024-07-23 14:11:35.109364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.345 [2024-07-23 14:11:35.109482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.345 [2024-07-23 14:11:35.109498] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.345 [2024-07-23 14:11:35.109505] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.345 [2024-07-23 14:11:35.109510] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.345 [2024-07-23 14:11:35.109526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.345 qpair failed and we were unable to recover it.
00:29:44.345 [2024-07-23 14:11:35.119427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.345 [2024-07-23 14:11:35.119561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.345 [2024-07-23 14:11:35.119577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.345 [2024-07-23 14:11:35.119584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.345 [2024-07-23 14:11:35.119590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.345 [2024-07-23 14:11:35.119606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.345 qpair failed and we were unable to recover it.
00:29:44.345 [2024-07-23 14:11:35.129409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.345 [2024-07-23 14:11:35.129532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.345 [2024-07-23 14:11:35.129548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.345 [2024-07-23 14:11:35.129555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.345 [2024-07-23 14:11:35.129561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.345 [2024-07-23 14:11:35.129578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.345 qpair failed and we were unable to recover it.
00:29:44.345 [2024-07-23 14:11:35.139451] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.345 [2024-07-23 14:11:35.139576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.345 [2024-07-23 14:11:35.139593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.345 [2024-07-23 14:11:35.139600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.345 [2024-07-23 14:11:35.139606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.345 [2024-07-23 14:11:35.139622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.345 qpair failed and we were unable to recover it.
00:29:44.345 [2024-07-23 14:11:35.149475] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.345 [2024-07-23 14:11:35.149597] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.345 [2024-07-23 14:11:35.149613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.345 [2024-07-23 14:11:35.149620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.345 [2024-07-23 14:11:35.149625] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.345 [2024-07-23 14:11:35.149642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.345 qpair failed and we were unable to recover it.
00:29:44.345 [2024-07-23 14:11:35.159515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.345 [2024-07-23 14:11:35.159638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.345 [2024-07-23 14:11:35.159654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.345 [2024-07-23 14:11:35.159661] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.345 [2024-07-23 14:11:35.159667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.345 [2024-07-23 14:11:35.159683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.345 qpair failed and we were unable to recover it.
00:29:44.345 [2024-07-23 14:11:35.169577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.345 [2024-07-23 14:11:35.169704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.345 [2024-07-23 14:11:35.169720] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.345 [2024-07-23 14:11:35.169727] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.345 [2024-07-23 14:11:35.169733] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.345 [2024-07-23 14:11:35.169749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.345 qpair failed and we were unable to recover it.
00:29:44.345 [2024-07-23 14:11:35.179564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.345 [2024-07-23 14:11:35.179707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.345 [2024-07-23 14:11:35.179724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.345 [2024-07-23 14:11:35.179734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.345 [2024-07-23 14:11:35.179740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.345 [2024-07-23 14:11:35.179756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.345 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.189592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.189715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.189732] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.189739] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.189744] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.189761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.199650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.199812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.199829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.199835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.199841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.199857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.209584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.209706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.209722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.209729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.209734] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.209751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.219669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.219790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.219806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.219813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.219819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.219835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.229687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.229811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.229827] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.229834] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.229840] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.229856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.239710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.239829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.239845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.239852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.239858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.239874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.249833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.249961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.249978] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.249985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.249991] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.250007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.259836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.259973] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.259989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.259996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.260001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.260017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.269819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.269938] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.269954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.269966] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.269972] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.269989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.279836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.279956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.279972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.279979] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.279986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.280002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.289821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.289947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.289964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.289970] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.289976] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.289992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.299899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.300018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.300034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.300040] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.300051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.300068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.346 qpair failed and we were unable to recover it.
00:29:44.346 [2024-07-23 14:11:35.309932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.346 [2024-07-23 14:11:35.310053] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.346 [2024-07-23 14:11:35.310069] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.346 [2024-07-23 14:11:35.310077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.346 [2024-07-23 14:11:35.310082] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.346 [2024-07-23 14:11:35.310099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.347 qpair failed and we were unable to recover it.
00:29:44.347 [2024-07-23 14:11:35.319953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.347 [2024-07-23 14:11:35.320083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.347 [2024-07-23 14:11:35.320103] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.347 [2024-07-23 14:11:35.320113] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.347 [2024-07-23 14:11:35.320122] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.347 [2024-07-23 14:11:35.320140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.347 qpair failed and we were unable to recover it.
00:29:44.347 [2024-07-23 14:11:35.329982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.347 [2024-07-23 14:11:35.330156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.347 [2024-07-23 14:11:35.330174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.347 [2024-07-23 14:11:35.330181] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.347 [2024-07-23 14:11:35.330187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.347 [2024-07-23 14:11:35.330205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.347 qpair failed and we were unable to recover it.
00:29:44.347 [2024-07-23 14:11:35.340028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.347 [2024-07-23 14:11:35.340153] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.347 [2024-07-23 14:11:35.340170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.347 [2024-07-23 14:11:35.340177] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.347 [2024-07-23 14:11:35.340183] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.347 [2024-07-23 14:11:35.340200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.347 qpair failed and we were unable to recover it.
00:29:44.347 [2024-07-23 14:11:35.350030] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.347 [2024-07-23 14:11:35.350157] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.347 [2024-07-23 14:11:35.350174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.347 [2024-07-23 14:11:35.350180] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.347 [2024-07-23 14:11:35.350186] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.347 [2024-07-23 14:11:35.350203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.347 qpair failed and we were unable to recover it.
00:29:44.347 [2024-07-23 14:11:35.360072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.347 [2024-07-23 14:11:35.360191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.347 [2024-07-23 14:11:35.360219] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.347 [2024-07-23 14:11:35.360226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.347 [2024-07-23 14:11:35.360232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.347 [2024-07-23 14:11:35.360250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.347 qpair failed and we were unable to recover it.
00:29:44.607 [2024-07-23 14:11:35.370081] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.607 [2024-07-23 14:11:35.370203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.607 [2024-07-23 14:11:35.370220] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.607 [2024-07-23 14:11:35.370227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.607 [2024-07-23 14:11:35.370233] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.607 [2024-07-23 14:11:35.370250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.607 qpair failed and we were unable to recover it.
00:29:44.607 [2024-07-23 14:11:35.380122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.607 [2024-07-23 14:11:35.380249] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.607 [2024-07-23 14:11:35.380265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.607 [2024-07-23 14:11:35.380272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.607 [2024-07-23 14:11:35.380277] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.607 [2024-07-23 14:11:35.380293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.607 qpair failed and we were unable to recover it.
00:29:44.607 [2024-07-23 14:11:35.390160] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.607 [2024-07-23 14:11:35.390280] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.607 [2024-07-23 14:11:35.390297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.390304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.390310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.390326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.400178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.608 [2024-07-23 14:11:35.400298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.608 [2024-07-23 14:11:35.400314] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.400321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.400326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.400347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.410194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.608 [2024-07-23 14:11:35.410317] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.608 [2024-07-23 14:11:35.410333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.410340] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.410346] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.410362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.420232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.608 [2024-07-23 14:11:35.420359] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.608 [2024-07-23 14:11:35.420375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.420382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.420387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.420403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.430254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.608 [2024-07-23 14:11:35.430376] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.608 [2024-07-23 14:11:35.430392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.430399] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.430405] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.430421] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.440309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.608 [2024-07-23 14:11:35.440430] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.608 [2024-07-23 14:11:35.440446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.440453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.440459] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.440475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.450262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.608 [2024-07-23 14:11:35.450385] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.608 [2024-07-23 14:11:35.450404] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.450411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.450417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.450433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.460355] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.608 [2024-07-23 14:11:35.460479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.608 [2024-07-23 14:11:35.460496] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.460502] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.460508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.460525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.470400] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:44.608 [2024-07-23 14:11:35.470524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:44.608 [2024-07-23 14:11:35.470540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:44.608 [2024-07-23 14:11:35.470546] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:44.608 [2024-07-23 14:11:35.470552] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:44.608 [2024-07-23 14:11:35.470569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:44.608 qpair failed and we were unable to recover it.
00:29:44.608 [2024-07-23 14:11:35.480417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.608 [2024-07-23 14:11:35.480546] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.608 [2024-07-23 14:11:35.480563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.608 [2024-07-23 14:11:35.480569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.608 [2024-07-23 14:11:35.480576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.608 [2024-07-23 14:11:35.480591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.608 qpair failed and we were unable to recover it. 00:29:44.608 [2024-07-23 14:11:35.490426] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.608 [2024-07-23 14:11:35.490547] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.608 [2024-07-23 14:11:35.490564] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.608 [2024-07-23 14:11:35.490570] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.608 [2024-07-23 14:11:35.490576] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.608 [2024-07-23 14:11:35.490596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.608 qpair failed and we were unable to recover it. 00:29:44.608 [2024-07-23 14:11:35.500479] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.608 [2024-07-23 14:11:35.500602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.608 [2024-07-23 14:11:35.500618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.608 [2024-07-23 14:11:35.500624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.608 [2024-07-23 14:11:35.500630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.608 [2024-07-23 14:11:35.500646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.608 qpair failed and we were unable to recover it. 
00:29:44.609 [2024-07-23 14:11:35.520710] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:29:44.609 A controller has encountered a failure and is being reset.
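A plausible reading of the status fields above, for anyone triaging this log: sct 1 is NVMe status code type 0x1 (Command Specific Status), and sc 130 is status code 0x82, which for a Fabrics CONNECT command means "Connect Invalid Parameters". That is consistent with the target-side "Unknown controller ID 0x1": once the controller is reset, the host keeps retrying the I/O-queue CONNECT against a controller ID the target no longer has, and each attempt is rejected until the reset completes. The decode below is an illustrative sketch, not output of the test run:

  # Illustrative decode of the repeated completion status (sct 1, sc 130)
  printf 'sct=0x%x sc=0x%x\n' 1 130   # prints: sct=0x1 sc=0x82
  # sct 0x1 = Command Specific Status; for a Fabrics CONNECT, sc 0x82 is
  # "Connect Invalid Parameters". rc -5 is -EIO from the host-side poller,
  # and transport error -6 is -ENXIO ("No such device or address"),
  # matching the message printed by nvme_qpair.c above.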
00:29:44.609 [2024-07-23 14:11:35.540577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.540700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.540716] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.540723] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.540729] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.540745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 00:29:44.609 [2024-07-23 14:11:35.550605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.550724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.550741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.550747] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.550753] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.550770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 00:29:44.609 [2024-07-23 14:11:35.560640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.560762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.560778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.560785] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.560792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.560808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 
00:29:44.609 [2024-07-23 14:11:35.570722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.570852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.570870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.570878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.570884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.570902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 00:29:44.609 [2024-07-23 14:11:35.580794] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.580917] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.580936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.580946] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.580953] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.580969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 00:29:44.609 [2024-07-23 14:11:35.590742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.590866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.590884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.590891] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.590897] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.590913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 
00:29:44.609 [2024-07-23 14:11:35.600692] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.600813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.600830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.600838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.600844] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.600861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 00:29:44.609 [2024-07-23 14:11:35.610779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.610910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.610926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.610933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.610939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.610956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 00:29:44.609 [2024-07-23 14:11:35.620822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.609 [2024-07-23 14:11:35.620942] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.609 [2024-07-23 14:11:35.620960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.609 [2024-07-23 14:11:35.620967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.609 [2024-07-23 14:11:35.620973] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.609 [2024-07-23 14:11:35.620989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.609 qpair failed and we were unable to recover it. 
00:29:44.870 [2024-07-23 14:11:35.630861] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.870 [2024-07-23 14:11:35.630988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.870 [2024-07-23 14:11:35.631005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.870 [2024-07-23 14:11:35.631011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.870 [2024-07-23 14:11:35.631018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.870 [2024-07-23 14:11:35.631034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.870 qpair failed and we were unable to recover it. 00:29:44.870 [2024-07-23 14:11:35.640880] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.870 [2024-07-23 14:11:35.641002] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.870 [2024-07-23 14:11:35.641018] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.870 [2024-07-23 14:11:35.641025] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.870 [2024-07-23 14:11:35.641031] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.870 [2024-07-23 14:11:35.641052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.870 qpair failed and we were unable to recover it. 00:29:44.870 [2024-07-23 14:11:35.650891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.870 [2024-07-23 14:11:35.651012] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.870 [2024-07-23 14:11:35.651028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.870 [2024-07-23 14:11:35.651035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.870 [2024-07-23 14:11:35.651041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.870 [2024-07-23 14:11:35.651066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.870 qpair failed and we were unable to recover it. 
00:29:44.870 [2024-07-23 14:11:35.660938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.870 [2024-07-23 14:11:35.661066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.870 [2024-07-23 14:11:35.661082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.870 [2024-07-23 14:11:35.661089] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.870 [2024-07-23 14:11:35.661095] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.870 [2024-07-23 14:11:35.661112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.870 qpair failed and we were unable to recover it. 00:29:44.870 [2024-07-23 14:11:35.670946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.870 [2024-07-23 14:11:35.671073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.870 [2024-07-23 14:11:35.671092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.870 [2024-07-23 14:11:35.671099] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.870 [2024-07-23 14:11:35.671105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.870 [2024-07-23 14:11:35.671121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.870 qpair failed and we were unable to recover it. 00:29:44.870 [2024-07-23 14:11:35.681031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.870 [2024-07-23 14:11:35.681191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.870 [2024-07-23 14:11:35.681208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.681215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.681220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.681236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 
00:29:44.871 [2024-07-23 14:11:35.691015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.691141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.691158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.691164] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.691170] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.691186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.871 [2024-07-23 14:11:35.700986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.701120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.701137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.701143] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.701149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.701166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.871 [2024-07-23 14:11:35.711085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.711202] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.711218] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.711225] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.711231] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.711250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 
00:29:44.871 [2024-07-23 14:11:35.721115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.721236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.721252] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.721259] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.721265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.721281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.871 [2024-07-23 14:11:35.731124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.731246] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.731262] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.731269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.731275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.731291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.871 [2024-07-23 14:11:35.741166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.741292] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.741308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.741315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.741320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.741337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 
00:29:44.871 [2024-07-23 14:11:35.751190] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.751312] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.751328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.751335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.751341] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.751357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.871 [2024-07-23 14:11:35.761178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.761328] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.761349] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.761356] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.761362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.761378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.871 [2024-07-23 14:11:35.771276] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.771398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.771415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.771421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.771427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.771443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 
00:29:44.871 [2024-07-23 14:11:35.781261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.781437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.781455] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.781461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.781468] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.781484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.871 [2024-07-23 14:11:35.791289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.791440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.791457] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.791464] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.791470] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.791486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.871 [2024-07-23 14:11:35.801339] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.801464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.801480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.801487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.801493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.801513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 
00:29:44.871 [2024-07-23 14:11:35.811374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.871 [2024-07-23 14:11:35.811507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.871 [2024-07-23 14:11:35.811530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.871 [2024-07-23 14:11:35.811537] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.871 [2024-07-23 14:11:35.811543] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.871 [2024-07-23 14:11:35.811559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.871 qpair failed and we were unable to recover it. 00:29:44.872 [2024-07-23 14:11:35.821323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.872 [2024-07-23 14:11:35.821447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.872 [2024-07-23 14:11:35.821468] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.872 [2024-07-23 14:11:35.821477] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.872 [2024-07-23 14:11:35.821484] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.872 [2024-07-23 14:11:35.821500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.872 qpair failed and we were unable to recover it. 00:29:44.872 [2024-07-23 14:11:35.831399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.872 [2024-07-23 14:11:35.831531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.872 [2024-07-23 14:11:35.831549] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.872 [2024-07-23 14:11:35.831556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.872 [2024-07-23 14:11:35.831562] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.872 [2024-07-23 14:11:35.831579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.872 qpair failed and we were unable to recover it. 
00:29:44.872 [2024-07-23 14:11:35.841424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.872 [2024-07-23 14:11:35.841576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.872 [2024-07-23 14:11:35.841592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.872 [2024-07-23 14:11:35.841600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.872 [2024-07-23 14:11:35.841606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.872 [2024-07-23 14:11:35.841622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.872 qpair failed and we were unable to recover it. 00:29:44.872 [2024-07-23 14:11:35.851468] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.872 [2024-07-23 14:11:35.851592] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.872 [2024-07-23 14:11:35.851613] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.872 [2024-07-23 14:11:35.851620] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.872 [2024-07-23 14:11:35.851626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.872 [2024-07-23 14:11:35.851642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.872 qpair failed and we were unable to recover it. 00:29:44.872 [2024-07-23 14:11:35.861455] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.872 [2024-07-23 14:11:35.861580] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.872 [2024-07-23 14:11:35.861597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.872 [2024-07-23 14:11:35.861603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.872 [2024-07-23 14:11:35.861609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.872 [2024-07-23 14:11:35.861625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.872 qpair failed and we were unable to recover it. 
00:29:44.872 [2024-07-23 14:11:35.871529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.872 [2024-07-23 14:11:35.871645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.872 [2024-07-23 14:11:35.871661] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.872 [2024-07-23 14:11:35.871668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.872 [2024-07-23 14:11:35.871674] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.872 [2024-07-23 14:11:35.871689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.872 qpair failed and we were unable to recover it. 00:29:44.872 [2024-07-23 14:11:35.881600] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:44.872 [2024-07-23 14:11:35.881758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:44.872 [2024-07-23 14:11:35.881774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:44.872 [2024-07-23 14:11:35.881782] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:44.872 [2024-07-23 14:11:35.881788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:44.872 [2024-07-23 14:11:35.881804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:44.872 qpair failed and we were unable to recover it. 00:29:45.133 [2024-07-23 14:11:35.891644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.891811] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.891829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.891836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.891845] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.891862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 
00:29:45.133 [2024-07-23 14:11:35.901645] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.901770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.901787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.901793] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.901800] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.901816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 00:29:45.133 [2024-07-23 14:11:35.911659] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.911830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.911847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.911853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.911859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.911876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 00:29:45.133 [2024-07-23 14:11:35.921681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.921802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.921818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.921824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.921830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.921846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 
00:29:45.133 [2024-07-23 14:11:35.931672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.931951] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.931968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.931975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.931981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.931996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 00:29:45.133 [2024-07-23 14:11:35.941765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.941909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.941925] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.941932] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.941938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.941955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 00:29:45.133 [2024-07-23 14:11:35.951777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.951893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.951910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.951916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.951922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.951938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 
00:29:45.133 [2024-07-23 14:11:35.961856] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.962016] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.962034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.962041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.962054] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.962071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 00:29:45.133 [2024-07-23 14:11:35.971837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.133 [2024-07-23 14:11:35.971958] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.133 [2024-07-23 14:11:35.971974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.133 [2024-07-23 14:11:35.971981] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.133 [2024-07-23 14:11:35.971987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.133 [2024-07-23 14:11:35.972004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.133 qpair failed and we were unable to recover it. 00:29:45.134 [2024-07-23 14:11:35.981891] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.134 [2024-07-23 14:11:35.982018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.134 [2024-07-23 14:11:35.982034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.134 [2024-07-23 14:11:35.982041] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.134 [2024-07-23 14:11:35.982057] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.134 [2024-07-23 14:11:35.982074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.134 qpair failed and we were unable to recover it. 
00:29:45.134 [2024-07-23 14:11:35.991921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.134 [2024-07-23 14:11:35.992060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.134 [2024-07-23 14:11:35.992077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.134 [2024-07-23 14:11:35.992084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.134 [2024-07-23 14:11:35.992090] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.134 [2024-07-23 14:11:35.992106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.134 qpair failed and we were unable to recover it. 00:29:45.134 [2024-07-23 14:11:36.001924] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.134 [2024-07-23 14:11:36.002049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.134 [2024-07-23 14:11:36.002066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.134 [2024-07-23 14:11:36.002073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.134 [2024-07-23 14:11:36.002079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.134 [2024-07-23 14:11:36.002096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.134 qpair failed and we were unable to recover it. 00:29:45.134 [2024-07-23 14:11:36.011917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.134 [2024-07-23 14:11:36.012037] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.134 [2024-07-23 14:11:36.012059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.134 [2024-07-23 14:11:36.012066] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.134 [2024-07-23 14:11:36.012072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.134 [2024-07-23 14:11:36.012088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.134 qpair failed and we were unable to recover it. 
00:29:45.134 [2024-07-23 14:11:36.021987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.134 [2024-07-23 14:11:36.022112] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.134 [2024-07-23 14:11:36.022129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.134 [2024-07-23 14:11:36.022136] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.134 [2024-07-23 14:11:36.022142] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.134 [2024-07-23 14:11:36.022158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.134 qpair failed and we were unable to recover it. 00:29:45.134 [2024-07-23 14:11:36.032041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.134 [2024-07-23 14:11:36.032164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.134 [2024-07-23 14:11:36.032181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.134 [2024-07-23 14:11:36.032187] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.134 [2024-07-23 14:11:36.032193] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.134 [2024-07-23 14:11:36.032209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.134 qpair failed and we were unable to recover it. 00:29:45.134 [2024-07-23 14:11:36.042085] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.134 [2024-07-23 14:11:36.042219] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.134 [2024-07-23 14:11:36.042236] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.134 [2024-07-23 14:11:36.042243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.134 [2024-07-23 14:11:36.042249] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.134 [2024-07-23 14:11:36.042265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.134 qpair failed and we were unable to recover it. 
00:29:45.134 [2024-07-23 14:11:36.052038] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.134 [2024-07-23 14:11:36.052168] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.134 [2024-07-23 14:11:36.052184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.134 [2024-07-23 14:11:36.052191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.134 [2024-07-23 14:11:36.052196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.134 [2024-07-23 14:11:36.052212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.134 qpair failed and we were unable to recover it.
00:29:45.134 [2024-07-23 14:11:36.062137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.134 [2024-07-23 14:11:36.062264] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.134 [2024-07-23 14:11:36.062281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.134 [2024-07-23 14:11:36.062288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.134 [2024-07-23 14:11:36.062294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.134 [2024-07-23 14:11:36.062310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.134 qpair failed and we were unable to recover it.
00:29:45.134 [2024-07-23 14:11:36.072166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.134 [2024-07-23 14:11:36.072294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.134 [2024-07-23 14:11:36.072313] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.134 [2024-07-23 14:11:36.072326] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.134 [2024-07-23 14:11:36.072332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.134 [2024-07-23 14:11:36.072351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.134 qpair failed and we were unable to recover it.
00:29:45.134 [2024-07-23 14:11:36.082137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.134 [2024-07-23 14:11:36.082424] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.134 [2024-07-23 14:11:36.082442] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.134 [2024-07-23 14:11:36.082449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.134 [2024-07-23 14:11:36.082455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.134 [2024-07-23 14:11:36.082472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.134 qpair failed and we were unable to recover it.
00:29:45.134 [2024-07-23 14:11:36.092195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.134 [2024-07-23 14:11:36.092321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.134 [2024-07-23 14:11:36.092338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.134 [2024-07-23 14:11:36.092345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.134 [2024-07-23 14:11:36.092351] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.134 [2024-07-23 14:11:36.092368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.134 qpair failed and we were unable to recover it.
00:29:45.134 [2024-07-23 14:11:36.102253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.134 [2024-07-23 14:11:36.102380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.134 [2024-07-23 14:11:36.102397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.134 [2024-07-23 14:11:36.102404] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.134 [2024-07-23 14:11:36.102411] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.134 [2024-07-23 14:11:36.102428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.134 qpair failed and we were unable to recover it.
00:29:45.134 [2024-07-23 14:11:36.112274] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.135 [2024-07-23 14:11:36.112397] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.135 [2024-07-23 14:11:36.112415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.135 [2024-07-23 14:11:36.112423] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.135 [2024-07-23 14:11:36.112429] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.135 [2024-07-23 14:11:36.112445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.135 qpair failed and we were unable to recover it.
00:29:45.135 [2024-07-23 14:11:36.122257] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.135 [2024-07-23 14:11:36.122429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.135 [2024-07-23 14:11:36.122447] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.135 [2024-07-23 14:11:36.122454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.135 [2024-07-23 14:11:36.122460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.135 [2024-07-23 14:11:36.122477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.135 qpair failed and we were unable to recover it.
00:29:45.135 [2024-07-23 14:11:36.132345] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.135 [2024-07-23 14:11:36.132468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.135 [2024-07-23 14:11:36.132484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.135 [2024-07-23 14:11:36.132491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.135 [2024-07-23 14:11:36.132497] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.135 [2024-07-23 14:11:36.132513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.135 qpair failed and we were unable to recover it.
00:29:45.135 [2024-07-23 14:11:36.142358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.135 [2024-07-23 14:11:36.142506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.135 [2024-07-23 14:11:36.142524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.135 [2024-07-23 14:11:36.142530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.135 [2024-07-23 14:11:36.142537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.135 [2024-07-23 14:11:36.142553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.152326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.152440] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.152456] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.152463] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.152469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.152485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.162443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.162562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.162583] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.162590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.162596] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.162612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.172405] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.172565] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.172582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.172588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.172595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.172611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.182464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.182584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.182601] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.182608] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.182613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.182629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.192453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.192576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.192592] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.192599] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.192604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.192621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.202480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.202601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.202617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.202624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.202630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.202646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.212587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.212711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.212728] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.212734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.212740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.212756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.222661] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.222802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.222818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.222825] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.222831] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.222847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.232657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.232777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.232794] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.232801] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.232807] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.232823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.396 qpair failed and we were unable to recover it.
00:29:45.396 [2024-07-23 14:11:36.242684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.396 [2024-07-23 14:11:36.242805] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.396 [2024-07-23 14:11:36.242821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.396 [2024-07-23 14:11:36.242828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.396 [2024-07-23 14:11:36.242834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.396 [2024-07-23 14:11:36.242850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.252716] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.252835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.252855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.252862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.252868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.252884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.262730] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.262853] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.262870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.262877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.262883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.262899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.272703] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.272823] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.272839] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.272846] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.272852] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.272869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.282811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.282933] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.282949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.282956] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.282962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.282979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.292835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.292957] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.292973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.292980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.292986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.293006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.302866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.302997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.303013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.303020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.303026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.303047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.312899] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.313023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.313040] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.313052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.313058] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.313074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.322953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.323080] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.323101] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.323110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.323118] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.323136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.332942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.333081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.333098] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.333105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.333111] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.333129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.343009] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.343141] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.343161] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.343168] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.343173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.343190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.353014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.353143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.353160] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.353167] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.353173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.353189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.363008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.363290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.363308] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.363315] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.363321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.363337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.372994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.397 [2024-07-23 14:11:36.373167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.397 [2024-07-23 14:11:36.373184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.397 [2024-07-23 14:11:36.373190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.397 [2024-07-23 14:11:36.373196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.397 [2024-07-23 14:11:36.373213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.397 qpair failed and we were unable to recover it.
00:29:45.397 [2024-07-23 14:11:36.383010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.398 [2024-07-23 14:11:36.383288] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.398 [2024-07-23 14:11:36.383306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.398 [2024-07-23 14:11:36.383313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.398 [2024-07-23 14:11:36.383322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.398 [2024-07-23 14:11:36.383338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.398 qpair failed and we were unable to recover it.
00:29:45.398 [2024-07-23 14:11:36.393159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.398 [2024-07-23 14:11:36.393293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.398 [2024-07-23 14:11:36.393310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.398 [2024-07-23 14:11:36.393317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.398 [2024-07-23 14:11:36.393324] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.398 [2024-07-23 14:11:36.393340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.398 qpair failed and we were unable to recover it.
00:29:45.398 [2024-07-23 14:11:36.403144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.398 [2024-07-23 14:11:36.403259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.398 [2024-07-23 14:11:36.403275] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.398 [2024-07-23 14:11:36.403282] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.398 [2024-07-23 14:11:36.403288] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.398 [2024-07-23 14:11:36.403304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.398 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.413194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.413351] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.413369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.413375] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.413381] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.413397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.423196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.423321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.423337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.423344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.423350] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.423367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.433238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.433362] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.433378] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.433385] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.433391] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.433407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.443295] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.443418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.443435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.443442] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.443447] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.443463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.453289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.453407] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.453424] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.453431] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.453437] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.453452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.463315] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.463442] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.463458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.463465] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.463471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.463487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.473325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.473449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.473465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.473472] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.473481] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.473497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.483378] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.483512] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.483529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.483536] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.483542] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.483558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.493389] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.493510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.659 [2024-07-23 14:11:36.493527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.659 [2024-07-23 14:11:36.493534] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.659 [2024-07-23 14:11:36.493540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.659 [2024-07-23 14:11:36.493555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.659 qpair failed and we were unable to recover it.
00:29:45.659 [2024-07-23 14:11:36.503440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.659 [2024-07-23 14:11:36.503557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.503574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.503581] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.503587] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.503602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.513457] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.513618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.513635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.513641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.513647] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.513664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.523418] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.523537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.523554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.523560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.523566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.523582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.533473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.533591] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.533608] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.533615] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.533621] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.533637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.543476] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.543600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.543616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.543623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.543629] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.543645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.553583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.553707] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.553724] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.553730] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.553736] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.553752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.563601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.563758] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.563776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.563786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.563792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.563808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.573654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.573780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.573798] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.573808] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.573814] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.573831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.583603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.583736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.583755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.583762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.583769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.583785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.593706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.593831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.593849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.593856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.593862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.593878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.603724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.603839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.603855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.603862] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.603868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.603884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.613678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:45.660 [2024-07-23 14:11:36.613797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:45.660 [2024-07-23 14:11:36.613814] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:45.660 [2024-07-23 14:11:36.613821] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:45.660 [2024-07-23 14:11:36.613826] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90
00:29:45.660 [2024-07-23 14:11:36.613843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:29:45.660 qpair failed and we were unable to recover it.
00:29:45.660 [2024-07-23 14:11:36.623780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.660 [2024-07-23 14:11:36.623906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.660 [2024-07-23 14:11:36.623924] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.660 [2024-07-23 14:11:36.623931] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.660 [2024-07-23 14:11:36.623937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.660 [2024-07-23 14:11:36.623953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.660 qpair failed and we were unable to recover it. 00:29:45.660 [2024-07-23 14:11:36.633803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.660 [2024-07-23 14:11:36.633919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.660 [2024-07-23 14:11:36.633935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.660 [2024-07-23 14:11:36.633942] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.660 [2024-07-23 14:11:36.633948] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.660 [2024-07-23 14:11:36.633965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.661 qpair failed and we were unable to recover it. 00:29:45.661 [2024-07-23 14:11:36.643810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.661 [2024-07-23 14:11:36.643929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.661 [2024-07-23 14:11:36.643946] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.661 [2024-07-23 14:11:36.643952] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.661 [2024-07-23 14:11:36.643958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.661 [2024-07-23 14:11:36.643974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.661 qpair failed and we were unable to recover it. 
00:29:45.661 [2024-07-23 14:11:36.653849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.661 [2024-07-23 14:11:36.653969] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.661 [2024-07-23 14:11:36.653985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.661 [2024-07-23 14:11:36.653995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.661 [2024-07-23 14:11:36.654001] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.661 [2024-07-23 14:11:36.654017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.661 qpair failed and we were unable to recover it. 00:29:45.661 [2024-07-23 14:11:36.663941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.661 [2024-07-23 14:11:36.664069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.661 [2024-07-23 14:11:36.664086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.661 [2024-07-23 14:11:36.664093] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.661 [2024-07-23 14:11:36.664099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.661 [2024-07-23 14:11:36.664114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.661 qpair failed and we were unable to recover it. 00:29:45.661 [2024-07-23 14:11:36.673925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.661 [2024-07-23 14:11:36.674049] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.661 [2024-07-23 14:11:36.674066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.661 [2024-07-23 14:11:36.674073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.661 [2024-07-23 14:11:36.674079] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.661 [2024-07-23 14:11:36.674095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.661 qpair failed and we were unable to recover it. 
00:29:45.922 [2024-07-23 14:11:36.683957] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.684083] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.684099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.684106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.684112] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.684128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 00:29:45.922 [2024-07-23 14:11:36.693969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.694102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.694119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.694126] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.694132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.694147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 00:29:45.922 [2024-07-23 14:11:36.703946] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.704076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.704093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.704100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.704106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.704123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 
00:29:45.922 [2024-07-23 14:11:36.714062] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.714180] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.714196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.714203] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.714209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.714226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 00:29:45.922 [2024-07-23 14:11:36.724075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.724229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.724246] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.724252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.724258] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.724275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 00:29:45.922 [2024-07-23 14:11:36.734101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.734242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.734258] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.734265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.734271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.734287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 
00:29:45.922 [2024-07-23 14:11:36.744135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.744255] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.744274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.744281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.744287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.744304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 00:29:45.922 [2024-07-23 14:11:36.754167] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.754290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.754306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.754313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.754318] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.754334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 00:29:45.922 [2024-07-23 14:11:36.764210] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.764341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.764358] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.764364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.764370] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.764386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 
00:29:45.922 [2024-07-23 14:11:36.774197] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.774316] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.774333] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.774339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.774345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.774361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 00:29:45.922 [2024-07-23 14:11:36.784312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.784431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.784448] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.784454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.784460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.784480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 00:29:45.922 [2024-07-23 14:11:36.794341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.794466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.794483] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.922 [2024-07-23 14:11:36.794490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.922 [2024-07-23 14:11:36.794496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.922 [2024-07-23 14:11:36.794512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.922 qpair failed and we were unable to recover it. 
00:29:45.922 [2024-07-23 14:11:36.804308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.922 [2024-07-23 14:11:36.804429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.922 [2024-07-23 14:11:36.804446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.804452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.804458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.804474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.814337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.814460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.814477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.814484] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.814490] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.814505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.824381] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.824501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.824520] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.824530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.824537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.824554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 
00:29:45.923 [2024-07-23 14:11:36.834467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.834601] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.834621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.834628] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.834633] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.834650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.844410] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.844531] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.844548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.844555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.844561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.844577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.854453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.854576] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.854593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.854600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.854606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.854622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 
00:29:45.923 [2024-07-23 14:11:36.864496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.864623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.864640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.864646] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.864652] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.864668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.874526] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.874685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.874702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.874708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.874714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.874733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.884545] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.884662] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.884679] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.884686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.884691] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.884708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 
00:29:45.923 [2024-07-23 14:11:36.894568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.894689] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.894706] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.894713] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.894719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.894734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.904657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.904789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.904806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.904813] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.904819] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.904834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.914616] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.914740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.914757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.914763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.914769] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.914785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 
00:29:45.923 [2024-07-23 14:11:36.924660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.924783] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.924799] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.924806] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.924811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.924827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.923 qpair failed and we were unable to recover it. 00:29:45.923 [2024-07-23 14:11:36.934668] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:45.923 [2024-07-23 14:11:36.934829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:45.923 [2024-07-23 14:11:36.934845] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:45.923 [2024-07-23 14:11:36.934851] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:45.923 [2024-07-23 14:11:36.934857] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:45.923 [2024-07-23 14:11:36.934873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:45.924 qpair failed and we were unable to recover it. 00:29:46.184 [2024-07-23 14:11:36.944721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.184 [2024-07-23 14:11:36.944846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:36.944862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:36.944869] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:36.944875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:36.944891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 
00:29:46.185 [2024-07-23 14:11:36.954750] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:36.954875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:36.954893] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:36.954900] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:36.954906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:36.954923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 00:29:46.185 [2024-07-23 14:11:36.964765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:36.964886] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:36.964903] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:36.964910] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:36.964919] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:36.964936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 00:29:46.185 [2024-07-23 14:11:36.974787] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:36.974911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:36.974927] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:36.974933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:36.974939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:36.974955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 
00:29:46.185 [2024-07-23 14:11:36.984836] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:36.984968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:36.984985] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:36.984992] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:36.984998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:36.985014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 00:29:46.185 [2024-07-23 14:11:36.994854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:36.994988] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:36.995004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:36.995011] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:36.995017] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:36.995033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 00:29:46.185 [2024-07-23 14:11:37.004890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:37.005013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:37.005030] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:37.005037] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:37.005048] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:37.005065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 
00:29:46.185 [2024-07-23 14:11:37.014890] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:37.015025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:37.015047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:37.015055] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:37.015061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:37.015078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 00:29:46.185 [2024-07-23 14:11:37.024877] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:37.024997] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:37.025013] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:37.025020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:37.025026] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:37.025049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 00:29:46.185 [2024-07-23 14:11:37.034968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:37.035104] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:37.035122] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:37.035129] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:37.035135] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:37.035151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 
00:29:46.185 [2024-07-23 14:11:37.045026] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:37.045165] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:37.045184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:37.045191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:37.045198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:37.045214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 00:29:46.185 [2024-07-23 14:11:37.055014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:37.055145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:37.055162] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:37.055172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:37.055178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:37.055194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 00:29:46.185 [2024-07-23 14:11:37.065061] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:37.065189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:37.065206] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.185 [2024-07-23 14:11:37.065213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.185 [2024-07-23 14:11:37.065220] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.185 [2024-07-23 14:11:37.065237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.185 qpair failed and we were unable to recover it. 
00:29:46.185 [2024-07-23 14:11:37.075080] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.185 [2024-07-23 14:11:37.075207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.185 [2024-07-23 14:11:37.075226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.075238] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.075247] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.075264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 00:29:46.186 [2024-07-23 14:11:37.085125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.186 [2024-07-23 14:11:37.085256] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.186 [2024-07-23 14:11:37.085273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.085281] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.085287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.085304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 00:29:46.186 [2024-07-23 14:11:37.095149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.186 [2024-07-23 14:11:37.095279] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.186 [2024-07-23 14:11:37.095298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.095306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.095312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.095331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 
00:29:46.186 [2024-07-23 14:11:37.105118] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.186 [2024-07-23 14:11:37.105248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.186 [2024-07-23 14:11:37.105265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.105273] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.105279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.105296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 00:29:46.186 [2024-07-23 14:11:37.115219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.186 [2024-07-23 14:11:37.115353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.186 [2024-07-23 14:11:37.115369] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.115377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.115383] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.115399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 00:29:46.186 [2024-07-23 14:11:37.125244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.186 [2024-07-23 14:11:37.125367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.186 [2024-07-23 14:11:37.125383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.125391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.125398] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.125414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 
00:29:46.186 [2024-07-23 14:11:37.135266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.186 [2024-07-23 14:11:37.135389] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.186 [2024-07-23 14:11:37.135405] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.135413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.135419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.135436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 00:29:46.186 [2024-07-23 14:11:37.145328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.186 [2024-07-23 14:11:37.145452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.186 [2024-07-23 14:11:37.145469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.145480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.145487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.145503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 00:29:46.186 [2024-07-23 14:11:37.155333] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.186 [2024-07-23 14:11:37.155452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.186 [2024-07-23 14:11:37.155469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.186 [2024-07-23 14:11:37.155476] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.186 [2024-07-23 14:11:37.155482] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.186 [2024-07-23 14:11:37.155499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.186 qpair failed and we were unable to recover it. 
00:29:46.974 [2024-07-23 14:11:37.797211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.974 [2024-07-23 14:11:37.797335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.974 [2024-07-23 14:11:37.797352] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.974 [2024-07-23 14:11:37.797360] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.974 [2024-07-23 14:11:37.797366] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.974 [2024-07-23 14:11:37.797383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.974 qpair failed and we were unable to recover it. 00:29:46.974 [2024-07-23 14:11:37.807231] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.974 [2024-07-23 14:11:37.807358] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.974 [2024-07-23 14:11:37.807376] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.974 [2024-07-23 14:11:37.807383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.974 [2024-07-23 14:11:37.807389] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.974 [2024-07-23 14:11:37.807406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.974 qpair failed and we were unable to recover it. 00:29:46.974 [2024-07-23 14:11:37.817262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.974 [2024-07-23 14:11:37.817386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.974 [2024-07-23 14:11:37.817406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.974 [2024-07-23 14:11:37.817413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.974 [2024-07-23 14:11:37.817419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.974 [2024-07-23 14:11:37.817436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.974 qpair failed and we were unable to recover it. 
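[editor's note] The timestamps show the host re-issuing the I/O-qpair CONNECT roughly every 10 ms against the same subsystem, which is the disconnect exerciser doing its job. A rough shell equivalent of that retry cadence (the address and NQN are taken from the log; the loop itself is purely illustrative, not the test's actual code path):

# Hedged sketch: retry the fabrics connect on a ~10 ms cadence until it sticks.
for i in $(seq 1 100); do
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 && break
    sleep 0.01
done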
00:29:46.974 [2024-07-23 14:11:37.827299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.974 [2024-07-23 14:11:37.827439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.974 [2024-07-23 14:11:37.827458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.974 [2024-07-23 14:11:37.827467] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.974 [2024-07-23 14:11:37.827473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.974 [2024-07-23 14:11:37.827491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.974 qpair failed and we were unable to recover it. 00:29:46.974 [2024-07-23 14:11:37.837336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.974 [2024-07-23 14:11:37.837462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.974 [2024-07-23 14:11:37.837480] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.974 [2024-07-23 14:11:37.837487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.974 [2024-07-23 14:11:37.837493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.974 [2024-07-23 14:11:37.837511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.974 qpair failed and we were unable to recover it. 00:29:46.974 [2024-07-23 14:11:37.847397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.974 [2024-07-23 14:11:37.847672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.974 [2024-07-23 14:11:37.847690] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.974 [2024-07-23 14:11:37.847698] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.974 [2024-07-23 14:11:37.847704] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.974 [2024-07-23 14:11:37.847721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.974 qpair failed and we were unable to recover it. 
00:29:46.974 [2024-07-23 14:11:37.857382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.974 [2024-07-23 14:11:37.857505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.974 [2024-07-23 14:11:37.857522] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.974 [2024-07-23 14:11:37.857530] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.974 [2024-07-23 14:11:37.857536] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.974 [2024-07-23 14:11:37.857557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.974 qpair failed and we were unable to recover it. 00:29:46.974 [2024-07-23 14:11:37.867419] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.974 [2024-07-23 14:11:37.867543] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.974 [2024-07-23 14:11:37.867561] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.974 [2024-07-23 14:11:37.867568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.974 [2024-07-23 14:11:37.867574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.974 [2024-07-23 14:11:37.867591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:46.975 [2024-07-23 14:11:37.877442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.877564] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.877582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.877589] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.877595] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.877612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 
00:29:46.975 [2024-07-23 14:11:37.887466] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.887590] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.887606] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.887614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.887620] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.887636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:46.975 [2024-07-23 14:11:37.897480] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.897604] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.897621] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.897629] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.897635] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.897651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:46.975 [2024-07-23 14:11:37.907534] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.907660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.907680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.907688] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.907693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.907710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 
00:29:46.975 [2024-07-23 14:11:37.917553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.917678] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.917696] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.917703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.917710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.917727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:46.975 [2024-07-23 14:11:37.927585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.927710] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.927727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.927734] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.927740] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.927756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:46.975 [2024-07-23 14:11:37.937593] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.937716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.937733] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.937740] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.937747] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.937762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 
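[editor's note] On the target side, "Unknown controller ID 0x1" from ctrlr.c means the I/O qpair's CONNECT named a cntlid the subsystem no longer has a live controller for. If this were being debugged interactively, the target's view could be checked over RPC; a sketch, assuming the standard rpc.py helper and the default /var/tmp/spdk.sock socket:

# Hedged sketch: list the controllers and qpairs the target currently
# associates with the subsystem; a missing cntlid 0x1 here would explain the
# rejections logged above.
./scripts/rpc.py nvmf_subsystem_get_controllers nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2016-06.io.spdk:cnode1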
00:29:46.975 [2024-07-23 14:11:37.947655] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.947791] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.947807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.947814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.947824] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.947840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:46.975 [2024-07-23 14:11:37.957676] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.957799] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.957815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.957822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.957829] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.957845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:46.975 [2024-07-23 14:11:37.967718] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.967849] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.967866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.967873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.967879] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.967896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 
00:29:46.975 [2024-07-23 14:11:37.977712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.977835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.977852] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.977859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.977865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.977882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:46.975 [2024-07-23 14:11:37.987751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:46.975 [2024-07-23 14:11:37.987880] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:46.975 [2024-07-23 14:11:37.987897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:46.975 [2024-07-23 14:11:37.987904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:46.975 [2024-07-23 14:11:37.987910] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:46.975 [2024-07-23 14:11:37.987926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:46.975 qpair failed and we were unable to recover it. 00:29:47.235 [2024-07-23 14:11:37.997797] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.235 [2024-07-23 14:11:37.997937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.235 [2024-07-23 14:11:37.997954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.235 [2024-07-23 14:11:37.997961] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.235 [2024-07-23 14:11:37.997967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.235 [2024-07-23 14:11:37.997984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.235 qpair failed and we were unable to recover it. 
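[editor's note] The "CQ transport error -6 (No such device or address)" from spdk_nvme_qpair_process_completions is the negated POSIX errno ENXIO, returned once the TCP qpair underneath is gone. That mapping can be confirmed from the shell on any Linux box:

# Hedged sketch: errno 6 is ENXIO ("No such device or address") on Linux.
grep -w 'ENXIO' /usr/include/asm-generic/errno-base.h
# expected: #define ENXIO  6  /* No such device or address */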
00:29:47.235 [2024-07-23 14:11:38.007810] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.235 [2024-07-23 14:11:38.007936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.235 [2024-07-23 14:11:38.007953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.235 [2024-07-23 14:11:38.007960] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.235 [2024-07-23 14:11:38.007967] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.235 [2024-07-23 14:11:38.007983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.235 qpair failed and we were unable to recover it. 00:29:47.235 [2024-07-23 14:11:38.017828] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.235 [2024-07-23 14:11:38.017955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.235 [2024-07-23 14:11:38.017972] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.235 [2024-07-23 14:11:38.017980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.236 [2024-07-23 14:11:38.017986] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.236 [2024-07-23 14:11:38.018003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.236 qpair failed and we were unable to recover it. 00:29:47.236 [2024-07-23 14:11:38.027876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.236 [2024-07-23 14:11:38.027999] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.236 [2024-07-23 14:11:38.028016] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.236 [2024-07-23 14:11:38.028024] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.236 [2024-07-23 14:11:38.028030] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.236 [2024-07-23 14:11:38.028051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.236 qpair failed and we were unable to recover it. 
00:29:47.236 [2024-07-23 14:11:38.037910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.236 [2024-07-23 14:11:38.038030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.236 [2024-07-23 14:11:38.038053] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.236 [2024-07-23 14:11:38.038060] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.236 [2024-07-23 14:11:38.038072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.236 [2024-07-23 14:11:38.038088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.236 qpair failed and we were unable to recover it. 00:29:47.236 [2024-07-23 14:11:38.047913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.236 [2024-07-23 14:11:38.048036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.236 [2024-07-23 14:11:38.048067] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.236 [2024-07-23 14:11:38.048074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.236 [2024-07-23 14:11:38.048080] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.236 [2024-07-23 14:11:38.048097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.236 qpair failed and we were unable to recover it. 00:29:47.236 [2024-07-23 14:11:38.057982] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.236 [2024-07-23 14:11:38.058113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.236 [2024-07-23 14:11:38.058129] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.236 [2024-07-23 14:11:38.058137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.236 [2024-07-23 14:11:38.058143] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.236 [2024-07-23 14:11:38.058159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.236 qpair failed and we were unable to recover it. 
00:29:47.236 [2024-07-23 14:11:38.067992] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.236 [2024-07-23 14:11:38.068122] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.236 [2024-07-23 14:11:38.068139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.236 [2024-07-23 14:11:38.068146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.236 [2024-07-23 14:11:38.068153] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.236 [2024-07-23 14:11:38.068170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.236 qpair failed and we were unable to recover it. 00:29:47.236 [2024-07-23 14:11:38.078050] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:47.236 [2024-07-23 14:11:38.078182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:47.236 [2024-07-23 14:11:38.078201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:47.236 [2024-07-23 14:11:38.078211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:47.236 [2024-07-23 14:11:38.078218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69c0000b90 00:29:47.236 [2024-07-23 14:11:38.078236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.236 qpair failed and we were unable to recover it. 00:29:47.236 Controller properly reset. 00:29:49.143 Initializing NVMe Controllers 00:29:49.143 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.143 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:49.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:49.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:49.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:49.143 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:49.143 Initialization complete. Launching workers. 
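[editor's note] Once the controller is properly reset, the exerciser re-attaches over fabrics and associates one TCP qpair with each of four lcores before launching I/O workers. The attach target is expressible as an SPDK transport ID string; a standalone run in the same shape, using the perf-family example app (binary path and flag values are assumptions for illustration, not taken from this log), might look like:

# Hedged sketch: re-attach to the target with a multi-core SPDK example app.
# The -r transport ID mirrors the "Attaching to NVMe over Fabrics controller"
# line above, and -c 0xF yields the four lcores the log associates qpairs with.
./build/examples/perf -c 0xF -q 32 -o 4096 -w randrw -M 50 -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'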
00:29:49.144 Starting thread on core 1 00:29:49.144 Starting thread on core 2 00:29:49.144 Starting thread on core 3 00:29:49.144 Starting thread on core 0 00:29:49.144 14:11:39 -- host/target_disconnect.sh@59 -- # sync 00:29:49.144 00:29:49.144 real 0m11.205s 00:29:49.144 user 0m25.273s 00:29:49.144 sys 0m4.392s 00:29:49.144 14:11:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.144 14:11:39 -- common/autotest_common.sh@10 -- # set +x 00:29:49.144 ************************************ 00:29:49.144 END TEST nvmf_target_disconnect_tc2 00:29:49.144 ************************************ 00:29:49.144 14:11:39 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:29:49.144 14:11:39 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:49.144 14:11:39 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:49.144 14:11:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:49.144 14:11:39 -- nvmf/common.sh@116 -- # sync 00:29:49.144 14:11:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:49.144 14:11:39 -- nvmf/common.sh@119 -- # set +e 00:29:49.144 14:11:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:49.144 14:11:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:49.144 rmmod nvme_tcp 00:29:49.144 rmmod nvme_fabrics 00:29:49.144 rmmod nvme_keyring 00:29:49.144 14:11:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:49.144 14:11:39 -- nvmf/common.sh@123 -- # set -e 00:29:49.144 14:11:39 -- nvmf/common.sh@124 -- # return 0 00:29:49.144 14:11:39 -- nvmf/common.sh@477 -- # '[' -n 3436811 ']' 00:29:49.144 14:11:39 -- nvmf/common.sh@478 -- # killprocess 3436811 00:29:49.144 14:11:39 -- common/autotest_common.sh@926 -- # '[' -z 3436811 ']' 00:29:49.144 14:11:39 -- common/autotest_common.sh@930 -- # kill -0 3436811 00:29:49.144 14:11:39 -- common/autotest_common.sh@931 -- # uname 00:29:49.144 14:11:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:49.144 14:11:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3436811 00:29:49.144 14:11:39 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:29:49.144 14:11:39 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:29:49.144 14:11:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3436811' 00:29:49.144 killing process with pid 3436811 00:29:49.144 14:11:39 -- common/autotest_common.sh@945 -- # kill 3436811 00:29:49.144 14:11:39 -- common/autotest_common.sh@950 -- # wait 3436811 00:29:49.403 14:11:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:49.403 14:11:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:49.403 14:11:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:49.403 14:11:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:49.403 14:11:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:49.403 14:11:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.403 14:11:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:49.403 14:11:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.309 14:11:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:51.309 00:29:51.309 real 0m18.575s 00:29:51.309 user 0m51.709s 00:29:51.309 sys 0m8.320s 00:29:51.309 14:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.309 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:51.309 ************************************ 00:29:51.309 END TEST nvmf_target_disconnect 00:29:51.309 
************************************ 00:29:51.309 14:11:42 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:51.309 14:11:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:51.309 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:51.569 14:11:42 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:51.569 00:29:51.569 real 23m11.303s 00:29:51.569 user 62m39.400s 00:29:51.569 sys 5m51.362s 00:29:51.569 14:11:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:51.569 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:51.569 ************************************ 00:29:51.569 END TEST nvmf_tcp 00:29:51.569 ************************************ 00:29:51.569 14:11:42 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:29:51.569 14:11:42 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:51.569 14:11:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:51.569 14:11:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:51.569 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:51.569 ************************************ 00:29:51.569 START TEST spdkcli_nvmf_tcp 00:29:51.569 ************************************ 00:29:51.569 14:11:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:51.569 * Looking for test storage... 00:29:51.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:29:51.569 14:11:42 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:29:51.569 14:11:42 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:51.569 14:11:42 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:29:51.569 14:11:42 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.569 14:11:42 -- nvmf/common.sh@7 -- # uname -s 00:29:51.569 14:11:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.569 14:11:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.569 14:11:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.569 14:11:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:51.569 14:11:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.569 14:11:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.569 14:11:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.569 14:11:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.569 14:11:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.569 14:11:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.569 14:11:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:51.569 14:11:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:51.570 14:11:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.570 14:11:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.570 14:11:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.570 14:11:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.570 14:11:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:51.570 14:11:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.570 14:11:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.570 14:11:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.570 14:11:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.570 14:11:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.570 14:11:42 -- paths/export.sh@5 -- # export PATH 00:29:51.570 14:11:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.570 14:11:42 -- nvmf/common.sh@46 -- # : 0 00:29:51.570 14:11:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:51.570 14:11:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:51.570 14:11:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:51.570 14:11:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.570 14:11:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.570 14:11:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:51.570 14:11:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:51.570 14:11:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:51.570 14:11:42 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:51.570 14:11:42 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:51.570 14:11:42 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:51.570 14:11:42 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:51.570 14:11:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:51.570 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:51.570 14:11:42 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:51.570 14:11:42 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3438360 00:29:51.570 14:11:42 -- spdkcli/common.sh@34 -- # waitforlisten 3438360 00:29:51.570 14:11:42 -- common/autotest_common.sh@819 -- # '[' -z 3438360 ']' 00:29:51.570 14:11:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:51.570 14:11:42 -- spdkcli/common.sh@32 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:51.570 14:11:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:51.570 14:11:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:51.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:51.570 14:11:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:51.570 14:11:42 -- common/autotest_common.sh@10 -- # set +x 00:29:51.570 [2024-07-23 14:11:42.537489] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:29:51.570 [2024-07-23 14:11:42.537541] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3438360 ] 00:29:51.570 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.829 [2024-07-23 14:11:42.592829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:51.829 [2024-07-23 14:11:42.671051] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:51.829 [2024-07-23 14:11:42.671178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:51.829 [2024-07-23 14:11:42.671181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:52.398 14:11:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:52.398 14:11:43 -- common/autotest_common.sh@852 -- # return 0 00:29:52.398 14:11:43 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:52.398 14:11:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:52.398 14:11:43 -- common/autotest_common.sh@10 -- # set +x 00:29:52.398 14:11:43 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:52.398 14:11:43 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:52.398 14:11:43 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:52.398 14:11:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:52.398 14:11:43 -- common/autotest_common.sh@10 -- # set +x 00:29:52.398 14:11:43 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:52.398 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:52.398 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:52.398 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:52.398 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:52.398 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:52.398 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:52.398 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:52.398 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:52.398 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:52.398 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:52.398 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:52.398 ' 00:29:52.966 [2024-07-23 14:11:43.691745] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:54.871 [2024-07-23 14:11:45.757779] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.250 [2024-07-23 14:11:46.933793] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:58.157 [2024-07-23 14:11:49.100829] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:00.062 [2024-07-23 14:11:50.975037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:01.439 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:01.439 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:01.439 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:01.439 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:01.439 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:01.439 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:01.439 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:01.439 Executing command: ['/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:01.439 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:01.439 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:01.440 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:01.440 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:01.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:01.440 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:01.698 14:11:52 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:01.698 14:11:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.698 14:11:52 -- common/autotest_common.sh@10 -- # set +x 00:30:01.698 14:11:52 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:01.698 14:11:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:01.698 14:11:52 -- common/autotest_common.sh@10 -- # set +x 00:30:01.698 14:11:52 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:01.699 14:11:52 -- spdkcli/common.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:01.990 14:11:52 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:01.990 14:11:52 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:01.990 14:11:52 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:01.990 14:11:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:01.990 14:11:52 -- common/autotest_common.sh@10 -- # set +x 00:30:01.990 14:11:52 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:01.990 14:11:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:01.990 14:11:52 -- common/autotest_common.sh@10 -- # set +x 00:30:02.250 14:11:52 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:02.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:02.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:02.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:02.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:02.250 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:02.250 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:02.250 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:02.250 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:02.250 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:02.250 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:02.250 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:02.250 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:02.250 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:02.250 ' 00:30:07.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:07.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:07.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:07.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:07.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:07.525 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:07.525 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:07.525 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:07.525 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:07.525 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 
00:30:07.525 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:07.525 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:07.525 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:07.525 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:07.525 14:11:57 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:07.525 14:11:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:07.525 14:11:57 -- common/autotest_common.sh@10 -- # set +x 00:30:07.525 14:11:57 -- spdkcli/nvmf.sh@90 -- # killprocess 3438360 00:30:07.525 14:11:57 -- common/autotest_common.sh@926 -- # '[' -z 3438360 ']' 00:30:07.525 14:11:57 -- common/autotest_common.sh@930 -- # kill -0 3438360 00:30:07.525 14:11:57 -- common/autotest_common.sh@931 -- # uname 00:30:07.525 14:11:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:07.525 14:11:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3438360 00:30:07.525 14:11:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:07.525 14:11:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:07.525 14:11:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3438360' 00:30:07.525 killing process with pid 3438360 00:30:07.525 14:11:58 -- common/autotest_common.sh@945 -- # kill 3438360 00:30:07.525 [2024-07-23 14:11:58.022308] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:07.525 14:11:58 -- common/autotest_common.sh@950 -- # wait 3438360 00:30:07.525 14:11:58 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:07.525 14:11:58 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:07.525 14:11:58 -- spdkcli/common.sh@13 -- # '[' -n 3438360 ']' 00:30:07.525 14:11:58 -- spdkcli/common.sh@14 -- # killprocess 3438360 00:30:07.525 14:11:58 -- common/autotest_common.sh@926 -- # '[' -z 3438360 ']' 00:30:07.525 14:11:58 -- common/autotest_common.sh@930 -- # kill -0 3438360 00:30:07.525 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3438360) - No such process 00:30:07.525 14:11:58 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3438360 is not found' 00:30:07.525 Process with pid 3438360 is not found 00:30:07.525 14:11:58 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:07.525 14:11:58 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:07.525 14:11:58 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:07.525 00:30:07.525 real 0m15.852s 00:30:07.525 user 0m32.806s 00:30:07.525 sys 0m0.716s 00:30:07.525 14:11:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:07.525 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:30:07.525 ************************************ 00:30:07.525 END TEST spdkcli_nvmf_tcp 00:30:07.525 ************************************ 00:30:07.525 14:11:58 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:07.525 14:11:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:07.525 14:11:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:07.525 14:11:58 -- 
common/autotest_common.sh@10 -- # set +x 00:30:07.525 ************************************ 00:30:07.525 START TEST nvmf_identify_passthru 00:30:07.525 ************************************ 00:30:07.525 14:11:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:07.525 * Looking for test storage... 00:30:07.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:07.525 14:11:58 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.525 14:11:58 -- nvmf/common.sh@7 -- # uname -s 00:30:07.525 14:11:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.525 14:11:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.525 14:11:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.525 14:11:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.525 14:11:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.525 14:11:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.525 14:11:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.526 14:11:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.526 14:11:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.526 14:11:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.526 14:11:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:07.526 14:11:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:07.526 14:11:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.526 14:11:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.526 14:11:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.526 14:11:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.526 14:11:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.526 14:11:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.526 14:11:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.526 14:11:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.526 14:11:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.526 14:11:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.526 14:11:58 -- paths/export.sh@5 -- # export PATH 00:30:07.526 14:11:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.526 14:11:58 -- nvmf/common.sh@46 -- # : 0 00:30:07.526 14:11:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:07.526 14:11:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:07.526 14:11:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:07.526 14:11:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.526 14:11:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.526 14:11:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:07.526 14:11:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:07.526 14:11:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:07.526 14:11:58 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.526 14:11:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.526 14:11:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.526 14:11:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.526 14:11:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.526 14:11:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.526 14:11:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.526 14:11:58 -- paths/export.sh@5 -- # export PATH 00:30:07.526 14:11:58 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.526 14:11:58 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:07.526 14:11:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:07.526 14:11:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.526 14:11:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:07.526 14:11:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:07.526 14:11:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:07.526 14:11:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.526 14:11:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:07.526 14:11:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.526 14:11:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:07.526 14:11:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:07.526 14:11:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:07.526 14:11:58 -- common/autotest_common.sh@10 -- # set +x 00:30:12.804 14:12:03 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:12.804 14:12:03 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:12.804 14:12:03 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:12.804 14:12:03 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:12.804 14:12:03 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:12.804 14:12:03 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:12.804 14:12:03 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:12.804 14:12:03 -- nvmf/common.sh@294 -- # net_devs=() 00:30:12.804 14:12:03 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:12.804 14:12:03 -- nvmf/common.sh@295 -- # e810=() 00:30:12.804 14:12:03 -- nvmf/common.sh@295 -- # local -ga e810 00:30:12.804 14:12:03 -- nvmf/common.sh@296 -- # x722=() 00:30:12.804 14:12:03 -- nvmf/common.sh@296 -- # local -ga x722 00:30:12.804 14:12:03 -- nvmf/common.sh@297 -- # mlx=() 00:30:12.804 14:12:03 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:12.804 14:12:03 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:12.804 14:12:03 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:12.804 14:12:03 -- 
nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:12.804 14:12:03 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:12.804 14:12:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:12.804 14:12:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:12.804 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:12.804 14:12:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:12.804 14:12:03 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:12.804 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:12.804 14:12:03 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:12.804 14:12:03 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:12.804 14:12:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.804 14:12:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:12.804 14:12:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.804 14:12:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:12.804 Found net devices under 0000:86:00.0: cvl_0_0 00:30:12.804 14:12:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.804 14:12:03 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:12.804 14:12:03 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:12.804 14:12:03 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:12.804 14:12:03 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:12.804 14:12:03 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:12.804 Found net devices under 0000:86:00.1: cvl_0_1 00:30:12.804 14:12:03 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:12.804 14:12:03 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:12.804 14:12:03 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:12.804 14:12:03 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:12.804 14:12:03 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:12.804 14:12:03 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.804 14:12:03 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:12.804 14:12:03 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:12.804 14:12:03 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:12.804 14:12:03 -- nvmf/common.sh@236 
-- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:12.804 14:12:03 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:12.804 14:12:03 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:12.804 14:12:03 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:12.804 14:12:03 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:12.804 14:12:03 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:12.804 14:12:03 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:12.804 14:12:03 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:12.804 14:12:03 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:12.804 14:12:03 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:12.804 14:12:03 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:12.804 14:12:03 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:12.804 14:12:03 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:12.804 14:12:03 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:12.804 14:12:03 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:12.804 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:12.804 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:30:12.804 00:30:12.804 --- 10.0.0.2 ping statistics --- 00:30:12.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.804 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:30:12.804 14:12:03 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:12.804 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:12.804 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.354 ms 00:30:12.804 00:30:12.804 --- 10.0.0.1 ping statistics --- 00:30:12.804 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.804 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:30:12.804 14:12:03 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.804 14:12:03 -- nvmf/common.sh@410 -- # return 0 00:30:12.804 14:12:03 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:12.804 14:12:03 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.804 14:12:03 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:12.804 14:12:03 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.804 14:12:03 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:12.804 14:12:03 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:12.804 14:12:03 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:12.804 14:12:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:12.804 14:12:03 -- common/autotest_common.sh@10 -- # set +x 00:30:12.804 14:12:03 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:12.804 14:12:03 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:12.804 14:12:03 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:12.804 14:12:03 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:12.804 14:12:03 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:12.804 14:12:03 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:12.804 14:12:03 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:12.804 14:12:03 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:30:12.804 14:12:03 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:12.804 14:12:03 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:12.804 14:12:03 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:12.804 14:12:03 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:30:12.804 14:12:03 -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:30:12.804 14:12:03 -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:30:12.804 14:12:03 -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:30:12.804 14:12:03 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:12.804 14:12:03 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:30:12.804 14:12:03 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:12.804 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.002 14:12:07 -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:30:17.002 14:12:07 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:30:17.002 14:12:07 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:17.002 14:12:07 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:17.002 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.201 14:12:12 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:21.201 14:12:12 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:21.201 14:12:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:21.201 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:30:21.201 14:12:12 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:21.201 14:12:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:21.201 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:30:21.201 14:12:12 -- target/identify_passthru.sh@31 -- # nvmfpid=3445934 00:30:21.201 14:12:12 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:21.201 14:12:12 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:21.201 14:12:12 -- target/identify_passthru.sh@35 -- # waitforlisten 3445934 00:30:21.201 14:12:12 -- common/autotest_common.sh@819 -- # '[' -z 3445934 ']' 00:30:21.201 14:12:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:21.201 14:12:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:21.201 14:12:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:21.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:21.201 14:12:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:21.201 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:30:21.201 [2024-07-23 14:12:12.125241] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
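The two probes just completed record the local controller's identity over PCIe (serial BTLJ72430F0E1P0FGN, model field beginning INTEL) so the passthru test can later check that a host sees the same values through the NVMe/TCP subsystem. A condensed sketch of that step — every command here appears verbatim in the trace above; only the `head -n1` stands in for the helper's first-bdf selection:

    # Sketch: take the first NVMe bdf reported by gen_nvme.sh, then read the
    # serial and model over PCIe with spdk_nvme_identify. Run from the SPDK tree.
    bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
    serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Serial Number:' | awk '{print $3}')
    model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 \
        | grep 'Model Number:' | awk '{print $3}')
    echo "local PCIe controller: serial=$serial model=$model"

The TCP-side counterparts run near the end of the test with -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and the '!=' comparisons there fail the test if either value differs.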
00:30:21.201 [2024-07-23 14:12:12.125287] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.201 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.201 [2024-07-23 14:12:12.181861] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:21.462 [2024-07-23 14:12:12.262352] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:21.462 [2024-07-23 14:12:12.262460] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.462 [2024-07-23 14:12:12.262468] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.462 [2024-07-23 14:12:12.262475] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:21.462 [2024-07-23 14:12:12.262521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.462 [2024-07-23 14:12:12.262578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.462 [2024-07-23 14:12:12.262663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.462 [2024-07-23 14:12:12.262664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.031 14:12:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:22.031 14:12:12 -- common/autotest_common.sh@852 -- # return 0 00:30:22.031 14:12:12 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:22.031 14:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.031 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:30:22.031 INFO: Log level set to 20 00:30:22.031 INFO: Requests: 00:30:22.031 { 00:30:22.031 "jsonrpc": "2.0", 00:30:22.031 "method": "nvmf_set_config", 00:30:22.031 "id": 1, 00:30:22.031 "params": { 00:30:22.031 "admin_cmd_passthru": { 00:30:22.031 "identify_ctrlr": true 00:30:22.031 } 00:30:22.031 } 00:30:22.031 } 00:30:22.031 00:30:22.031 INFO: response: 00:30:22.031 { 00:30:22.031 "jsonrpc": "2.0", 00:30:22.031 "id": 1, 00:30:22.031 "result": true 00:30:22.031 } 00:30:22.031 00:30:22.031 14:12:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.031 14:12:12 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:22.031 14:12:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.031 14:12:12 -- common/autotest_common.sh@10 -- # set +x 00:30:22.031 INFO: Setting log level to 20 00:30:22.031 INFO: Setting log level to 20 00:30:22.031 INFO: Log level set to 20 00:30:22.031 INFO: Log level set to 20 00:30:22.031 INFO: Requests: 00:30:22.031 { 00:30:22.031 "jsonrpc": "2.0", 00:30:22.031 "method": "framework_start_init", 00:30:22.031 "id": 1 00:30:22.031 } 00:30:22.031 00:30:22.031 INFO: Requests: 00:30:22.031 { 00:30:22.031 "jsonrpc": "2.0", 00:30:22.031 "method": "framework_start_init", 00:30:22.031 "id": 1 00:30:22.031 } 00:30:22.031 00:30:22.032 [2024-07-23 14:12:13.020933] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:22.032 INFO: response: 00:30:22.032 { 00:30:22.032 "jsonrpc": "2.0", 00:30:22.032 "id": 1, 00:30:22.032 "result": true 00:30:22.032 } 00:30:22.032 00:30:22.032 INFO: response: 00:30:22.032 { 00:30:22.032 "jsonrpc": "2.0", 00:30:22.032 "id": 1, 00:30:22.032 "result": true 00:30:22.032 } 00:30:22.032 00:30:22.032 14:12:13 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.032 14:12:13 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:22.032 14:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.032 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:30:22.032 INFO: Setting log level to 40 00:30:22.032 INFO: Setting log level to 40 00:30:22.032 INFO: Setting log level to 40 00:30:22.032 [2024-07-23 14:12:13.034392] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.032 14:12:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:22.032 14:12:13 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:22.032 14:12:13 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:22.032 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:30:22.291 14:12:13 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:30:22.291 14:12:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:22.291 14:12:13 -- common/autotest_common.sh@10 -- # set +x 00:30:25.586 Nvme0n1 00:30:25.586 14:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.586 14:12:15 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:25.586 14:12:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.586 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:30:25.586 14:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.586 14:12:15 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:25.586 14:12:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.586 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:30:25.586 14:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.586 14:12:15 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:25.586 14:12:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.586 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:30:25.586 [2024-07-23 14:12:15.927232] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.586 14:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.586 14:12:15 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:25.586 14:12:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.586 14:12:15 -- common/autotest_common.sh@10 -- # set +x 00:30:25.586 [2024-07-23 14:12:15.935025] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:25.586 [ 00:30:25.586 { 00:30:25.586 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:25.586 "subtype": "Discovery", 00:30:25.586 "listen_addresses": [], 00:30:25.586 "allow_any_host": true, 00:30:25.586 "hosts": [] 00:30:25.586 }, 00:30:25.586 { 00:30:25.586 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:25.586 "subtype": "NVMe", 00:30:25.586 "listen_addresses": [ 00:30:25.586 { 00:30:25.586 "transport": "TCP", 00:30:25.586 "trtype": "TCP", 00:30:25.586 "adrfam": "IPv4", 00:30:25.586 "traddr": "10.0.0.2", 00:30:25.586 "trsvcid": "4420" 00:30:25.586 } 00:30:25.586 ], 00:30:25.586 "allow_any_host": true, 00:30:25.586 "hosts": [], 00:30:25.586 "serial_number": "SPDK00000000000001", 
00:30:25.586 "model_number": "SPDK bdev Controller", 00:30:25.586 "max_namespaces": 1, 00:30:25.586 "min_cntlid": 1, 00:30:25.586 "max_cntlid": 65519, 00:30:25.586 "namespaces": [ 00:30:25.586 { 00:30:25.586 "nsid": 1, 00:30:25.586 "bdev_name": "Nvme0n1", 00:30:25.586 "name": "Nvme0n1", 00:30:25.586 "nguid": "A3D7F1155665404B84760A0718E8C4ED", 00:30:25.586 "uuid": "a3d7f115-5665-404b-8476-0a0718e8c4ed" 00:30:25.586 } 00:30:25.586 ] 00:30:25.586 } 00:30:25.586 ] 00:30:25.586 14:12:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.586 14:12:15 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:25.586 14:12:15 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:25.586 14:12:15 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:25.586 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.586 14:12:16 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:30:25.586 14:12:16 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:25.586 14:12:16 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:25.586 14:12:16 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:25.586 EAL: No free 2048 kB hugepages reported on node 1 00:30:25.586 14:12:16 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:25.586 14:12:16 -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:30:25.586 14:12:16 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:25.586 14:12:16 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:25.586 14:12:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:25.586 14:12:16 -- common/autotest_common.sh@10 -- # set +x 00:30:25.586 14:12:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:25.586 14:12:16 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:25.586 14:12:16 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:25.586 14:12:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:25.586 14:12:16 -- nvmf/common.sh@116 -- # sync 00:30:25.586 14:12:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:25.586 14:12:16 -- nvmf/common.sh@119 -- # set +e 00:30:25.586 14:12:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:25.586 14:12:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:25.586 rmmod nvme_tcp 00:30:25.586 rmmod nvme_fabrics 00:30:25.586 rmmod nvme_keyring 00:30:25.586 14:12:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:25.586 14:12:16 -- nvmf/common.sh@123 -- # set -e 00:30:25.586 14:12:16 -- nvmf/common.sh@124 -- # return 0 00:30:25.586 14:12:16 -- nvmf/common.sh@477 -- # '[' -n 3445934 ']' 00:30:25.586 14:12:16 -- nvmf/common.sh@478 -- # killprocess 3445934 00:30:25.586 14:12:16 -- common/autotest_common.sh@926 -- # '[' -z 3445934 ']' 00:30:25.586 14:12:16 -- common/autotest_common.sh@930 -- # kill -0 3445934 00:30:25.586 14:12:16 -- common/autotest_common.sh@931 -- # uname 00:30:25.586 14:12:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:25.586 14:12:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3445934 00:30:25.586 14:12:16 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:25.586 14:12:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:25.586 14:12:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3445934' 00:30:25.586 killing process with pid 3445934 00:30:25.586 14:12:16 -- common/autotest_common.sh@945 -- # kill 3445934 00:30:25.586 [2024-07-23 14:12:16.326970] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:25.586 14:12:16 -- common/autotest_common.sh@950 -- # wait 3445934 00:30:26.966 14:12:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:26.966 14:12:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:26.966 14:12:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:26.966 14:12:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:26.966 14:12:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:26.966 14:12:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:26.966 14:12:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:26.966 14:12:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:28.894 14:12:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:28.894 00:30:28.894 real 0m21.633s 00:30:28.894 user 0m29.683s 00:30:28.894 sys 0m4.682s 00:30:28.894 14:12:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:28.894 14:12:19 -- common/autotest_common.sh@10 -- # set +x 00:30:28.894 ************************************ 00:30:28.894 END TEST nvmf_identify_passthru 00:30:28.894 ************************************ 00:30:29.154 14:12:19 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:29.154 14:12:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:29.154 14:12:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:29.154 14:12:19 -- common/autotest_common.sh@10 -- # set +x 00:30:29.154 ************************************ 00:30:29.154 START TEST nvmf_dif 00:30:29.154 ************************************ 00:30:29.154 14:12:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:29.154 * Looking for test storage... 
00:30:29.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:29.154 14:12:20 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:29.154 14:12:20 -- nvmf/common.sh@7 -- # uname -s 00:30:29.154 14:12:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:29.154 14:12:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:29.154 14:12:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:29.154 14:12:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:29.154 14:12:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:29.154 14:12:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:29.154 14:12:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:29.154 14:12:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:29.154 14:12:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:29.154 14:12:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:29.154 14:12:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:29.154 14:12:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:29.154 14:12:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:29.154 14:12:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:29.154 14:12:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:29.154 14:12:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:29.154 14:12:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:29.154 14:12:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:29.154 14:12:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:29.154 14:12:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.154 14:12:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.154 14:12:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.154 14:12:20 -- paths/export.sh@5 -- # export PATH 00:30:29.154 14:12:20 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:29.154 14:12:20 -- nvmf/common.sh@46 -- # : 0 00:30:29.154 14:12:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:29.154 14:12:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:29.154 14:12:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:29.154 14:12:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:29.154 14:12:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:29.154 14:12:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:29.154 14:12:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:29.154 14:12:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:29.154 14:12:20 -- target/dif.sh@15 -- # NULL_META=16 00:30:29.154 14:12:20 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:29.154 14:12:20 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:29.154 14:12:20 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:29.154 14:12:20 -- target/dif.sh@135 -- # nvmftestinit 00:30:29.154 14:12:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:29.154 14:12:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:29.154 14:12:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:29.154 14:12:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:29.154 14:12:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:29.154 14:12:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:29.154 14:12:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:29.154 14:12:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:29.154 14:12:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:29.154 14:12:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:29.154 14:12:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:29.155 14:12:20 -- common/autotest_common.sh@10 -- # set +x 00:30:34.436 14:12:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:34.436 14:12:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:34.436 14:12:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:34.436 14:12:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:34.436 14:12:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:34.436 14:12:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:34.436 14:12:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:34.436 14:12:25 -- nvmf/common.sh@294 -- # net_devs=() 00:30:34.436 14:12:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:34.436 14:12:25 -- nvmf/common.sh@295 -- # e810=() 00:30:34.436 14:12:25 -- nvmf/common.sh@295 -- # local -ga e810 00:30:34.436 14:12:25 -- nvmf/common.sh@296 -- # x722=() 00:30:34.436 14:12:25 -- nvmf/common.sh@296 -- # local -ga x722 00:30:34.436 14:12:25 -- nvmf/common.sh@297 -- # mlx=() 00:30:34.436 14:12:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:34.436 14:12:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:34.436 14:12:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:34.436 14:12:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:34.436 14:12:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:34.436 14:12:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:34.436 14:12:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:34.436 14:12:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:34.436 14:12:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:34.436 14:12:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:34.436 14:12:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:34.436 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:34.436 14:12:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:34.436 14:12:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:34.436 14:12:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.436 14:12:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.436 14:12:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:34.437 14:12:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:34.437 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:34.437 14:12:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:34.437 14:12:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:34.437 14:12:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.437 14:12:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:34.437 14:12:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.437 14:12:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:34.437 Found net devices under 0000:86:00.0: cvl_0_0 00:30:34.437 14:12:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:34.437 14:12:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:34.437 14:12:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:34.437 14:12:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:34.437 14:12:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:34.437 14:12:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:34.437 Found net devices under 0000:86:00.1: cvl_0_1 00:30:34.437 14:12:25 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:34.437 14:12:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:34.437 14:12:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:34.437 14:12:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:34.437 14:12:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:34.437 14:12:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:34.437 14:12:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:34.437 14:12:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:34.437 14:12:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:34.437 14:12:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:34.437 14:12:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:34.437 14:12:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:34.437 14:12:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:34.437 14:12:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:34.437 14:12:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:34.437 14:12:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:34.437 14:12:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:34.437 14:12:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:34.437 14:12:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:34.437 14:12:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:34.437 14:12:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:34.437 14:12:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:34.437 14:12:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:34.437 14:12:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:34.437 14:12:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:34.437 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.437 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:30:34.437 00:30:34.437 --- 10.0.0.2 ping statistics --- 00:30:34.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.437 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:30:34.437 14:12:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:34.437 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:34.437 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:30:34.437 00:30:34.437 --- 10.0.0.1 ping statistics --- 00:30:34.437 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.437 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:30:34.437 14:12:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.437 14:12:25 -- nvmf/common.sh@410 -- # return 0 00:30:34.437 14:12:25 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:34.437 14:12:25 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:36.975 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:36.975 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:30:36.975 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:30:36.975 14:12:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:36.975 14:12:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:36.975 14:12:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:36.975 14:12:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:36.975 14:12:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:36.975 14:12:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:36.975 14:12:27 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:36.975 14:12:27 -- target/dif.sh@137 -- # nvmfappstart 00:30:36.975 14:12:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:36.975 14:12:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:36.975 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:30:36.975 14:12:27 -- nvmf/common.sh@469 -- # nvmfpid=3451454 00:30:36.975 14:12:27 -- nvmf/common.sh@470 -- # waitforlisten 3451454 00:30:36.975 14:12:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:36.975 14:12:27 -- common/autotest_common.sh@819 -- # '[' -z 3451454 ']' 00:30:36.975 14:12:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:36.975 14:12:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:36.975 14:12:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:36.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
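nvmfappstart, whose trace ends here, starts nvmf_tgt inside the cvl_0_0_ns_spdk namespace built above and blocks in waitforlisten until the app's RPC socket answers. A rough equivalent with that helper replaced by a plain polling loop — rpc_get_methods is a stock SPDK RPC and /var/tmp/spdk.sock is the default socket, but the loop itself is only a sketch of what waitforlisten does internally:

    # Sketch of nvmfappstart: run the target in the test netns, then poll the
    # default RPC socket until it responds. The test's real helper is waitforlisten.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt (pid $nvmfpid) is up"

The lines that follow create everything fio_dif_1_default needs over that socket: a TCP transport with --dif-insert-or-strip, a null bdev with 16-byte metadata and DIF type 1, and a subsystem with a listener on 10.0.0.2:4420.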
00:30:36.975 14:12:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:36.975 14:12:27 -- common/autotest_common.sh@10 -- # set +x 00:30:36.975 [2024-07-23 14:12:27.939938] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:30:36.975 [2024-07-23 14:12:27.939980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:36.975 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.234 [2024-07-23 14:12:27.997177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.234 [2024-07-23 14:12:28.080300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:37.234 [2024-07-23 14:12:28.080405] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.234 [2024-07-23 14:12:28.080413] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:37.235 [2024-07-23 14:12:28.080420] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.235 [2024-07-23 14:12:28.080438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.803 14:12:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:37.803 14:12:28 -- common/autotest_common.sh@852 -- # return 0 00:30:37.803 14:12:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:37.803 14:12:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:37.803 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:30:37.803 14:12:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.803 14:12:28 -- target/dif.sh@139 -- # create_transport 00:30:37.803 14:12:28 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:37.803 14:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.803 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:30:37.803 [2024-07-23 14:12:28.784517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:37.803 14:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.803 14:12:28 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:37.803 14:12:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:37.803 14:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:37.803 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:30:37.803 ************************************ 00:30:37.803 START TEST fio_dif_1_default 00:30:37.803 ************************************ 00:30:37.803 14:12:28 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:30:37.803 14:12:28 -- target/dif.sh@86 -- # create_subsystems 0 00:30:37.803 14:12:28 -- target/dif.sh@28 -- # local sub 00:30:37.803 14:12:28 -- target/dif.sh@30 -- # for sub in "$@" 00:30:37.803 14:12:28 -- target/dif.sh@31 -- # create_subsystem 0 00:30:37.803 14:12:28 -- target/dif.sh@18 -- # local sub_id=0 00:30:37.803 14:12:28 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:37.803 14:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.803 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:30:37.803 bdev_null0 00:30:37.803 14:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:37.803 14:12:28 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:37.803 14:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:37.803 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:30:38.063 14:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.063 14:12:28 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:38.063 14:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:38.063 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:30:38.063 14:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.063 14:12:28 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:38.063 14:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:38.063 14:12:28 -- common/autotest_common.sh@10 -- # set +x 00:30:38.063 [2024-07-23 14:12:28.832766] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:38.063 14:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:38.063 14:12:28 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:38.063 14:12:28 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:38.063 14:12:28 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:38.063 14:12:28 -- nvmf/common.sh@520 -- # config=() 00:30:38.063 14:12:28 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.063 14:12:28 -- nvmf/common.sh@520 -- # local subsystem config 00:30:38.063 14:12:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:38.063 14:12:28 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.063 14:12:28 -- target/dif.sh@82 -- # gen_fio_conf 00:30:38.063 14:12:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:38.063 { 00:30:38.063 "params": { 00:30:38.063 "name": "Nvme$subsystem", 00:30:38.063 "trtype": "$TEST_TRANSPORT", 00:30:38.063 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:38.063 "adrfam": "ipv4", 00:30:38.063 "trsvcid": "$NVMF_PORT", 00:30:38.063 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:38.063 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:38.063 "hdgst": ${hdgst:-false}, 00:30:38.063 "ddgst": ${ddgst:-false} 00:30:38.063 }, 00:30:38.063 "method": "bdev_nvme_attach_controller" 00:30:38.063 } 00:30:38.063 EOF 00:30:38.063 )") 00:30:38.063 14:12:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:38.063 14:12:28 -- target/dif.sh@54 -- # local file 00:30:38.063 14:12:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:38.063 14:12:28 -- target/dif.sh@56 -- # cat 00:30:38.063 14:12:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:38.063 14:12:28 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.063 14:12:28 -- common/autotest_common.sh@1320 -- # shift 00:30:38.063 14:12:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:38.063 14:12:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.063 14:12:28 -- nvmf/common.sh@542 -- # cat 00:30:38.063 14:12:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.063 14:12:28 -- target/dif.sh@72 -- # (( file 
= 1 )) 00:30:38.063 14:12:28 -- target/dif.sh@72 -- # (( file <= files )) 00:30:38.063 14:12:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:38.063 14:12:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:38.063 14:12:28 -- nvmf/common.sh@544 -- # jq . 00:30:38.063 14:12:28 -- nvmf/common.sh@545 -- # IFS=, 00:30:38.063 14:12:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:38.063 "params": { 00:30:38.063 "name": "Nvme0", 00:30:38.063 "trtype": "tcp", 00:30:38.063 "traddr": "10.0.0.2", 00:30:38.063 "adrfam": "ipv4", 00:30:38.063 "trsvcid": "4420", 00:30:38.063 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:38.063 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:38.063 "hdgst": false, 00:30:38.063 "ddgst": false 00:30:38.063 }, 00:30:38.063 "method": "bdev_nvme_attach_controller" 00:30:38.063 }' 00:30:38.063 14:12:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:38.063 14:12:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:38.063 14:12:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:38.063 14:12:28 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:38.063 14:12:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:38.063 14:12:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:38.063 14:12:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:38.063 14:12:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:38.063 14:12:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:38.063 14:12:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:38.322 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:38.322 fio-3.35 00:30:38.322 Starting 1 thread 00:30:38.322 EAL: No free 2048 kB hugepages reported on node 1 00:30:38.891 [2024-07-23 14:12:29.729172] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
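The spdk_rpc_listen error just above and the spdk_rpc_initialize error that opens the next log line are a benign pair: the fio spdk_bdev plugin boots a second SPDK application inside the fio process, and that instance tries to bind the default RPC socket /var/tmp/spdk.sock already held by the nvmf target, so it simply continues without an RPC listener. The same pair recurs each time a new fio process starts below. For contrast, a minimal sketch of how a second instance that does need RPC would avoid the collision, assuming the standard -r application flag and the -s option of scripts/rpc.py (illustrative only, not part of this test run):

    # hypothetical: give a second SPDK app its own RPC socket
    ./build/bin/spdk_tgt -r /var/tmp/spdk2.sock &
    # then address that instance explicitly
    ./scripts/rpc.py -s /var/tmp/spdk2.sock bdev_get_bdevs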
00:30:38.891 [2024-07-23 14:12:29.729214] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:30:48.880 00:30:48.880 filename0: (groupid=0, jobs=1): err= 0: pid=3451878: Tue Jul 23 14:12:39 2024 00:30:48.880 read: IOPS=179, BW=717KiB/s (734kB/s)(7200KiB/10039msec) 00:30:48.880 slat (nsec): min=6008, max=36520, avg=6279.31, stdev=1129.29 00:30:48.880 clat (usec): min=1404, max=44453, avg=22290.41, stdev=20765.10 00:30:48.880 lat (usec): min=1410, max=44490, avg=22296.69, stdev=20765.04 00:30:48.880 clat percentiles (usec): 00:30:48.880 | 1.00th=[ 1418], 5.00th=[ 1418], 10.00th=[ 1434], 20.00th=[ 1434], 00:30:48.880 | 30.00th=[ 1450], 40.00th=[ 1450], 50.00th=[41157], 60.00th=[42730], 00:30:48.880 | 70.00th=[42730], 80.00th=[43254], 90.00th=[43779], 95.00th=[43779], 00:30:48.880 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:30:48.880 | 99.99th=[44303] 00:30:48.880 bw ( KiB/s): min= 670, max= 768, per=100.00%, avg=718.30, stdev=30.39, samples=20 00:30:48.880 iops : min= 167, max= 192, avg=179.55, stdev= 7.64, samples=20 00:30:48.880 lat (msec) : 2=49.78%, 50=50.22% 00:30:48.880 cpu : usr=95.07%, sys=4.68%, ctx=16, majf=0, minf=261 00:30:48.880 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:48.880 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.880 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:48.880 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:48.880 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:48.880 00:30:48.880 Run status group 0 (all jobs): 00:30:48.880 READ: bw=717KiB/s (734kB/s), 717KiB/s-717KiB/s (734kB/s-734kB/s), io=7200KiB (7373kB), run=10039-10039msec 00:30:49.140 14:12:40 -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:49.140 14:12:40 -- target/dif.sh@43 -- # local sub 00:30:49.140 14:12:40 -- target/dif.sh@45 -- # for sub in "$@" 00:30:49.140 14:12:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:49.140 14:12:40 -- target/dif.sh@36 -- # local sub_id=0 00:30:49.140 14:12:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:49.140 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.140 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 00:30:49.400 real 0m11.369s 00:30:49.400 user 0m15.917s 00:30:49.400 sys 0m0.797s 00:30:49.400 14:12:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 ************************************ 00:30:49.400 END TEST fio_dif_1_default 00:30:49.400 ************************************ 00:30:49.400 14:12:40 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:49.400 14:12:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:49.400 14:12:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 ************************************ 00:30:49.400 START TEST fio_dif_1_multi_subsystems 00:30:49.400 
************************************ 00:30:49.400 14:12:40 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:30:49.400 14:12:40 -- target/dif.sh@92 -- # local files=1 00:30:49.400 14:12:40 -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:49.400 14:12:40 -- target/dif.sh@28 -- # local sub 00:30:49.400 14:12:40 -- target/dif.sh@30 -- # for sub in "$@" 00:30:49.400 14:12:40 -- target/dif.sh@31 -- # create_subsystem 0 00:30:49.400 14:12:40 -- target/dif.sh@18 -- # local sub_id=0 00:30:49.400 14:12:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 bdev_null0 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 [2024-07-23 14:12:40.237085] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@30 -- # for sub in "$@" 00:30:49.400 14:12:40 -- target/dif.sh@31 -- # create_subsystem 1 00:30:49.400 14:12:40 -- target/dif.sh@18 -- # local sub_id=1 00:30:49.400 14:12:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 bdev_null1 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.400 14:12:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.400 14:12:40 -- common/autotest_common.sh@10 -- # set +x 
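Each create_subsystem call traced above expands to the same four RPC steps: create a DIF-capable null bdev, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. Gathered into one runnable sketch (the commands are exactly the rpc_cmd invocations shown in the trace; scripts/rpc.py is the usual way to issue them outside the autotest harness):

    # per-subsystem setup for sub_id=1, as traced above
    ./scripts/rpc.py bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        --serial-number 53313233-1 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420

The [[ 0 == 0 ]] check that follows each call is the harness verifying that rpc_cmd returned 0.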
00:30:49.400 14:12:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.400 14:12:40 -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:49.400 14:12:40 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:49.400 14:12:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:49.400 14:12:40 -- nvmf/common.sh@520 -- # config=() 00:30:49.400 14:12:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.400 14:12:40 -- nvmf/common.sh@520 -- # local subsystem config 00:30:49.400 14:12:40 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.400 14:12:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:49.400 14:12:40 -- target/dif.sh@82 -- # gen_fio_conf 00:30:49.400 14:12:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:49.400 { 00:30:49.400 "params": { 00:30:49.400 "name": "Nvme$subsystem", 00:30:49.400 "trtype": "$TEST_TRANSPORT", 00:30:49.400 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.400 "adrfam": "ipv4", 00:30:49.400 "trsvcid": "$NVMF_PORT", 00:30:49.400 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.400 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.400 "hdgst": ${hdgst:-false}, 00:30:49.400 "ddgst": ${ddgst:-false} 00:30:49.400 }, 00:30:49.400 "method": "bdev_nvme_attach_controller" 00:30:49.400 } 00:30:49.400 EOF 00:30:49.400 )") 00:30:49.400 14:12:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:49.400 14:12:40 -- target/dif.sh@54 -- # local file 00:30:49.400 14:12:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:49.400 14:12:40 -- target/dif.sh@56 -- # cat 00:30:49.400 14:12:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:49.401 14:12:40 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.401 14:12:40 -- common/autotest_common.sh@1320 -- # shift 00:30:49.401 14:12:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:49.401 14:12:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.401 14:12:40 -- nvmf/common.sh@542 -- # cat 00:30:49.401 14:12:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:30:49.401 14:12:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:49.401 14:12:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.401 14:12:40 -- target/dif.sh@72 -- # (( file <= files )) 00:30:49.401 14:12:40 -- target/dif.sh@73 -- # cat 00:30:49.401 14:12:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:49.401 14:12:40 -- target/dif.sh@72 -- # (( file++ )) 00:30:49.401 14:12:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:49.401 14:12:40 -- target/dif.sh@72 -- # (( file <= files )) 00:30:49.401 14:12:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:49.401 { 00:30:49.401 "params": { 00:30:49.401 "name": "Nvme$subsystem", 00:30:49.401 "trtype": "$TEST_TRANSPORT", 00:30:49.401 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:49.401 "adrfam": "ipv4", 00:30:49.401 "trsvcid": "$NVMF_PORT", 00:30:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:49.401 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:49.401 "hdgst": ${hdgst:-false}, 00:30:49.401 "ddgst": ${ddgst:-false} 00:30:49.401 }, 00:30:49.401 "method": 
"bdev_nvme_attach_controller" 00:30:49.401 } 00:30:49.401 EOF 00:30:49.401 )") 00:30:49.401 14:12:40 -- nvmf/common.sh@542 -- # cat 00:30:49.401 14:12:40 -- nvmf/common.sh@544 -- # jq . 00:30:49.401 14:12:40 -- nvmf/common.sh@545 -- # IFS=, 00:30:49.401 14:12:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:49.401 "params": { 00:30:49.401 "name": "Nvme0", 00:30:49.401 "trtype": "tcp", 00:30:49.401 "traddr": "10.0.0.2", 00:30:49.401 "adrfam": "ipv4", 00:30:49.401 "trsvcid": "4420", 00:30:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:49.401 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:49.401 "hdgst": false, 00:30:49.401 "ddgst": false 00:30:49.401 }, 00:30:49.401 "method": "bdev_nvme_attach_controller" 00:30:49.401 },{ 00:30:49.401 "params": { 00:30:49.401 "name": "Nvme1", 00:30:49.401 "trtype": "tcp", 00:30:49.401 "traddr": "10.0.0.2", 00:30:49.401 "adrfam": "ipv4", 00:30:49.401 "trsvcid": "4420", 00:30:49.401 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:49.401 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:49.401 "hdgst": false, 00:30:49.401 "ddgst": false 00:30:49.401 }, 00:30:49.401 "method": "bdev_nvme_attach_controller" 00:30:49.401 }' 00:30:49.401 14:12:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:49.401 14:12:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:49.401 14:12:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:49.401 14:12:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:49.401 14:12:40 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:49.401 14:12:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:49.401 14:12:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:49.401 14:12:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:49.401 14:12:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:49.401 14:12:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:49.661 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:49.661 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:49.661 fio-3.35 00:30:49.661 Starting 2 threads 00:30:49.661 EAL: No free 2048 kB hugepages reported on node 1 00:30:50.230 [2024-07-23 14:12:41.100340] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:30:50.230 [2024-07-23 14:12:41.100386] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:00.294 00:31:00.294 filename0: (groupid=0, jobs=1): err= 0: pid=3453871: Tue Jul 23 14:12:51 2024 00:31:00.294 read: IOPS=186, BW=746KiB/s (764kB/s)(7472KiB/10010msec) 00:31:00.294 slat (nsec): min=6095, max=37214, avg=7640.66, stdev=2581.66 00:31:00.294 clat (usec): min=814, max=44752, avg=21410.92, stdev=20521.44 00:31:00.294 lat (usec): min=820, max=44789, avg=21418.56, stdev=20521.18 00:31:00.294 clat percentiles (usec): 00:31:00.294 | 1.00th=[ 824], 5.00th=[ 840], 10.00th=[ 840], 20.00th=[ 848], 00:31:00.294 | 30.00th=[ 857], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:31:00.294 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:00.294 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44827], 99.95th=[44827], 00:31:00.294 | 99.99th=[44827] 00:31:00.294 bw ( KiB/s): min= 704, max= 768, per=50.83%, avg=745.60, stdev=31.32, samples=20 00:31:00.294 iops : min= 176, max= 192, avg=186.40, stdev= 7.83, samples=20 00:31:00.294 lat (usec) : 1000=49.89% 00:31:00.294 lat (msec) : 50=50.11% 00:31:00.294 cpu : usr=97.50%, sys=2.21%, ctx=15, majf=0, minf=184 00:31:00.294 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.294 issued rwts: total=1868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.294 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:00.294 filename1: (groupid=0, jobs=1): err= 0: pid=3453872: Tue Jul 23 14:12:51 2024 00:31:00.294 read: IOPS=179, BW=720KiB/s (737kB/s)(7200KiB/10004msec) 00:31:00.294 slat (nsec): min=6133, max=49339, avg=7010.07, stdev=1653.20 00:31:00.294 clat (usec): min=1409, max=43970, avg=22208.63, stdev=20668.63 00:31:00.294 lat (usec): min=1416, max=44001, avg=22215.64, stdev=20668.47 00:31:00.294 clat percentiles (usec): 00:31:00.294 | 1.00th=[ 1418], 5.00th=[ 1434], 10.00th=[ 1434], 20.00th=[ 1450], 00:31:00.294 | 30.00th=[ 1467], 40.00th=[ 1467], 50.00th=[41681], 60.00th=[42206], 00:31:00.294 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43779], 00:31:00.294 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:31:00.294 | 99.99th=[43779] 00:31:00.294 bw ( KiB/s): min= 672, max= 768, per=49.05%, avg=719.16, stdev=30.86, samples=19 00:31:00.294 iops : min= 168, max= 192, avg=179.79, stdev= 7.71, samples=19 00:31:00.294 lat (msec) : 2=49.78%, 50=50.22% 00:31:00.294 cpu : usr=97.68%, sys=2.05%, ctx=15, majf=0, minf=119 00:31:00.294 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:00.294 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.294 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:00.294 issued rwts: total=1800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:00.294 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:00.294 00:31:00.294 Run status group 0 (all jobs): 00:31:00.294 READ: bw=1466KiB/s (1501kB/s), 720KiB/s-746KiB/s (737kB/s-764kB/s), io=14.3MiB (15.0MB), run=10004-10010msec 00:31:00.554 14:12:51 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:00.554 14:12:51 -- target/dif.sh@43 -- # local sub 00:31:00.554 14:12:51 -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.554 14:12:51 -- target/dif.sh@46 -- # destroy_subsystem 0 
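Teardown mirrors setup: for each subsystem the harness deletes the NVMe-oF subsystem first, then the backing null bdev, as the trace that continues below shows for sub_id 0 and 1. In rpc.py form:

    # per-subsystem teardown, mirroring the trace below
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0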
00:31:00.554 14:12:51 -- target/dif.sh@36 -- # local sub_id=0 00:31:00.554 14:12:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:00.554 14:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.554 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.554 14:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.554 14:12:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:00.554 14:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.554 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.554 14:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.554 14:12:51 -- target/dif.sh@45 -- # for sub in "$@" 00:31:00.554 14:12:51 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:00.554 14:12:51 -- target/dif.sh@36 -- # local sub_id=1 00:31:00.554 14:12:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:00.554 14:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.554 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.554 14:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.554 14:12:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:00.554 14:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.554 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.554 14:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.554 00:31:00.554 real 0m11.246s 00:31:00.554 user 0m26.372s 00:31:00.554 sys 0m0.705s 00:31:00.554 14:12:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.554 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.554 ************************************ 00:31:00.554 END TEST fio_dif_1_multi_subsystems 00:31:00.554 ************************************ 00:31:00.554 14:12:51 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:00.554 14:12:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:00.554 14:12:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:00.554 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.554 ************************************ 00:31:00.554 START TEST fio_dif_rand_params 00:31:00.554 ************************************ 00:31:00.554 14:12:51 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:00.554 14:12:51 -- target/dif.sh@100 -- # local NULL_DIF 00:31:00.554 14:12:51 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:00.554 14:12:51 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:00.554 14:12:51 -- target/dif.sh@103 -- # bs=128k 00:31:00.554 14:12:51 -- target/dif.sh@103 -- # numjobs=3 00:31:00.554 14:12:51 -- target/dif.sh@103 -- # iodepth=3 00:31:00.554 14:12:51 -- target/dif.sh@103 -- # runtime=5 00:31:00.554 14:12:51 -- target/dif.sh@105 -- # create_subsystems 0 00:31:00.554 14:12:51 -- target/dif.sh@28 -- # local sub 00:31:00.554 14:12:51 -- target/dif.sh@30 -- # for sub in "$@" 00:31:00.554 14:12:51 -- target/dif.sh@31 -- # create_subsystem 0 00:31:00.554 14:12:51 -- target/dif.sh@18 -- # local sub_id=0 00:31:00.554 14:12:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:00.554 14:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.554 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.554 bdev_null0 00:31:00.555 14:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.555 14:12:51 -- 
target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:00.555 14:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.555 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.555 14:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.555 14:12:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:00.555 14:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.555 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.555 14:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.555 14:12:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:00.555 14:12:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:00.555 14:12:51 -- common/autotest_common.sh@10 -- # set +x 00:31:00.555 [2024-07-23 14:12:51.514945] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.555 14:12:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:00.555 14:12:51 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:00.555 14:12:51 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:00.555 14:12:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:00.555 14:12:51 -- nvmf/common.sh@520 -- # config=() 00:31:00.555 14:12:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.555 14:12:51 -- nvmf/common.sh@520 -- # local subsystem config 00:31:00.555 14:12:51 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:00.555 14:12:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:00.555 14:12:51 -- target/dif.sh@82 -- # gen_fio_conf 00:31:00.555 14:12:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:00.555 { 00:31:00.555 "params": { 00:31:00.555 "name": "Nvme$subsystem", 00:31:00.555 "trtype": "$TEST_TRANSPORT", 00:31:00.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:00.555 "adrfam": "ipv4", 00:31:00.555 "trsvcid": "$NVMF_PORT", 00:31:00.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:00.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:00.555 "hdgst": ${hdgst:-false}, 00:31:00.555 "ddgst": ${ddgst:-false} 00:31:00.555 }, 00:31:00.555 "method": "bdev_nvme_attach_controller" 00:31:00.555 } 00:31:00.555 EOF 00:31:00.555 )") 00:31:00.555 14:12:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:00.555 14:12:51 -- target/dif.sh@54 -- # local file 00:31:00.555 14:12:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:00.555 14:12:51 -- target/dif.sh@56 -- # cat 00:31:00.555 14:12:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:00.555 14:12:51 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.555 14:12:51 -- common/autotest_common.sh@1320 -- # shift 00:31:00.555 14:12:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:00.555 14:12:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.555 14:12:51 -- nvmf/common.sh@542 -- # cat 00:31:00.555 14:12:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:00.555 14:12:51 -- target/dif.sh@72 -- # (( file <= files )) 00:31:00.555 14:12:51 -- 
common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.555 14:12:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:00.555 14:12:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:00.555 14:12:51 -- nvmf/common.sh@544 -- # jq . 00:31:00.555 14:12:51 -- nvmf/common.sh@545 -- # IFS=, 00:31:00.555 14:12:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:00.555 "params": { 00:31:00.555 "name": "Nvme0", 00:31:00.555 "trtype": "tcp", 00:31:00.555 "traddr": "10.0.0.2", 00:31:00.555 "adrfam": "ipv4", 00:31:00.555 "trsvcid": "4420", 00:31:00.555 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:00.555 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:00.555 "hdgst": false, 00:31:00.555 "ddgst": false 00:31:00.555 }, 00:31:00.555 "method": "bdev_nvme_attach_controller" 00:31:00.555 }' 00:31:00.555 14:12:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:00.555 14:12:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:00.555 14:12:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:00.555 14:12:51 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:00.555 14:12:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:00.555 14:12:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:00.814 14:12:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:00.814 14:12:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:00.814 14:12:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:00.814 14:12:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:01.072 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:01.072 ... 00:31:01.072 fio-3.35 00:31:01.072 Starting 3 threads 00:31:01.072 EAL: No free 2048 kB hugepages reported on node 1 00:31:01.331 [2024-07-23 14:12:52.309551] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
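For fio_dif_rand_params the harness sets NULL_DIF=3, bs=128k, numjobs=3, iodepth=3 and runtime=5 (visible at the start of the test above), and gen_fio_conf turns those into the job file fio reads on /dev/fd/61. The job file itself is not echoed into the log; the following is a hypothetical reconstruction consistent with those parameters and with the fio banner above (the filename= value assumes the Nvme0n1 bdev naming noted earlier, and thread=1 reflects the SPDK fio plugin's requirement to run jobs as threads):

    ; hypothetical job file matching the traced parameters
    [filename0]
    ioengine=spdk_bdev
    thread=1
    rw=randread
    bs=128k
    numjobs=3
    iodepth=3
    runtime=5
    time_based=1
    filename=Nvme0n1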
00:31:01.331 [2024-07-23 14:12:52.309596] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:06.599 00:31:06.599 filename0: (groupid=0, jobs=1): err= 0: pid=3455867: Tue Jul 23 14:12:57 2024 00:31:06.599 read: IOPS=270, BW=33.8MiB/s (35.4MB/s)(169MiB/5004msec) 00:31:06.599 slat (nsec): min=6197, max=40187, avg=9062.48, stdev=3154.72 00:31:06.599 clat (usec): min=4038, max=94939, avg=11089.88, stdev=13009.39 00:31:06.599 lat (usec): min=4045, max=94951, avg=11098.94, stdev=13009.72 00:31:06.599 clat percentiles (usec): 00:31:06.599 | 1.00th=[ 4752], 5.00th=[ 5014], 10.00th=[ 5276], 20.00th=[ 5669], 00:31:06.599 | 30.00th=[ 6128], 40.00th=[ 6587], 50.00th=[ 7046], 60.00th=[ 7504], 00:31:06.599 | 70.00th=[ 8160], 80.00th=[ 8979], 90.00th=[12780], 95.00th=[49546], 00:31:06.599 | 99.00th=[55313], 99.50th=[56886], 99.90th=[92799], 99.95th=[94897], 00:31:06.599 | 99.99th=[94897] 00:31:06.599 bw ( KiB/s): min=24576, max=41472, per=42.75%, avg=34540.20, stdev=5818.91, samples=10 00:31:06.599 iops : min= 192, max= 324, avg=269.80, stdev=45.50, samples=10 00:31:06.599 lat (msec) : 10=85.72%, 20=5.40%, 50=4.36%, 100=4.51% 00:31:06.599 cpu : usr=95.70%, sys=3.84%, ctx=11, majf=0, minf=98 00:31:06.599 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.599 issued rwts: total=1352,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:06.599 filename0: (groupid=0, jobs=1): err= 0: pid=3455869: Tue Jul 23 14:12:57 2024 00:31:06.599 read: IOPS=91, BW=11.5MiB/s (12.1MB/s)(57.8MiB/5025msec) 00:31:06.599 slat (nsec): min=6140, max=37853, avg=10351.06, stdev=4403.04 00:31:06.599 clat (msec): min=7, max=101, avg=32.61, stdev=22.76 00:31:06.599 lat (msec): min=7, max=101, avg=32.62, stdev=22.76 00:31:06.599 clat percentiles (msec): 00:31:06.599 | 1.00th=[ 9], 5.00th=[ 12], 10.00th=[ 12], 20.00th=[ 13], 00:31:06.599 | 30.00th=[ 14], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 54], 00:31:06.599 | 70.00th=[ 56], 80.00th=[ 58], 90.00th=[ 59], 95.00th=[ 61], 00:31:06.599 | 99.00th=[ 100], 99.50th=[ 101], 99.90th=[ 102], 99.95th=[ 102], 00:31:06.599 | 99.99th=[ 102] 00:31:06.599 bw ( KiB/s): min= 7680, max=19456, per=14.54%, avg=11750.40, stdev=3347.53, samples=10 00:31:06.599 iops : min= 60, max= 152, avg=91.80, stdev=26.15, samples=10 00:31:06.599 lat (msec) : 10=3.03%, 20=53.90%, 50=1.30%, 100=41.56%, 250=0.22% 00:31:06.599 cpu : usr=97.27%, sys=2.33%, ctx=36, majf=0, minf=63 00:31:06.599 IO depths : 1=7.6%, 2=92.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.599 issued rwts: total=462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:06.599 filename0: (groupid=0, jobs=1): err= 0: pid=3455870: Tue Jul 23 14:12:57 2024 00:31:06.599 read: IOPS=271, BW=33.9MiB/s (35.5MB/s)(170MiB/5009msec) 00:31:06.599 slat (nsec): min=6162, max=54246, avg=8677.78, stdev=2709.96 00:31:06.599 clat (usec): min=4102, max=57621, avg=11051.40, stdev=12367.41 00:31:06.599 lat (usec): min=4108, max=57652, avg=11060.08, stdev=12367.65 00:31:06.599 clat percentiles (usec): 00:31:06.599 | 1.00th=[ 4752], 
5.00th=[ 4948], 10.00th=[ 5276], 20.00th=[ 5735], 00:31:06.599 | 30.00th=[ 6325], 40.00th=[ 6652], 50.00th=[ 7177], 60.00th=[ 7701], 00:31:06.599 | 70.00th=[ 8291], 80.00th=[ 9241], 90.00th=[12387], 95.00th=[49546], 00:31:06.599 | 99.00th=[53216], 99.50th=[53740], 99.90th=[57410], 99.95th=[57410], 00:31:06.599 | 99.99th=[57410] 00:31:06.599 bw ( KiB/s): min=27648, max=49664, per=42.93%, avg=34688.00, stdev=7173.33, samples=10 00:31:06.599 iops : min= 216, max= 388, avg=271.00, stdev=56.04, samples=10 00:31:06.599 lat (msec) : 10=84.98%, 20=6.19%, 50=4.93%, 100=3.90% 00:31:06.599 cpu : usr=95.69%, sys=3.59%, ctx=17, majf=0, minf=197 00:31:06.599 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:06.599 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.599 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.599 issued rwts: total=1358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.599 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:06.599 00:31:06.599 Run status group 0 (all jobs): 00:31:06.599 READ: bw=78.9MiB/s (82.7MB/s), 11.5MiB/s-33.9MiB/s (12.1MB/s-35.5MB/s), io=397MiB (416MB), run=5004-5025msec 00:31:06.858 14:12:57 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:06.859 14:12:57 -- target/dif.sh@43 -- # local sub 00:31:06.859 14:12:57 -- target/dif.sh@45 -- # for sub in "$@" 00:31:06.859 14:12:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:06.859 14:12:57 -- target/dif.sh@36 -- # local sub_id=0 00:31:06.859 14:12:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:06.859 14:12:57 -- target/dif.sh@109 -- # bs=4k 00:31:06.859 14:12:57 -- target/dif.sh@109 -- # numjobs=8 00:31:06.859 14:12:57 -- target/dif.sh@109 -- # iodepth=16 00:31:06.859 14:12:57 -- target/dif.sh@109 -- # runtime= 00:31:06.859 14:12:57 -- target/dif.sh@109 -- # files=2 00:31:06.859 14:12:57 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:06.859 14:12:57 -- target/dif.sh@28 -- # local sub 00:31:06.859 14:12:57 -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.859 14:12:57 -- target/dif.sh@31 -- # create_subsystem 0 00:31:06.859 14:12:57 -- target/dif.sh@18 -- # local sub_id=0 00:31:06.859 14:12:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 bdev_null0 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 
-- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 [2024-07-23 14:12:57.693513] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.859 14:12:57 -- target/dif.sh@31 -- # create_subsystem 1 00:31:06.859 14:12:57 -- target/dif.sh@18 -- # local sub_id=1 00:31:06.859 14:12:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 bdev_null1 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@30 -- # for sub in "$@" 00:31:06.859 14:12:57 -- target/dif.sh@31 -- # create_subsystem 2 00:31:06.859 14:12:57 -- target/dif.sh@18 -- # local sub_id=2 00:31:06.859 14:12:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 bdev_null2 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 
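The bdev_null_create calls traced above ("64 512 --md-size 16 --dif-type N") create a 64 MB null bdev with 512-byte data blocks plus 16 bytes of per-block metadata carrying NVMe protection information; --dif-type selects PI Type 1, 2, or 3, which is what distinguishes the dif.sh test groups in this run. The three flavors exercised across these tests, in rpc.py form:

    # null-bdev variants used by target/dif.sh (64 MB, 512 B blocks, 16 B metadata)
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1  # PI Type 1
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2  # PI Type 2
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3  # PI Type 3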
00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:06.859 14:12:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:06.859 14:12:57 -- common/autotest_common.sh@10 -- # set +x 00:31:06.859 14:12:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:06.859 14:12:57 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:06.859 14:12:57 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:06.859 14:12:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:06.859 14:12:57 -- nvmf/common.sh@520 -- # config=() 00:31:06.859 14:12:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.859 14:12:57 -- nvmf/common.sh@520 -- # local subsystem config 00:31:06.859 14:12:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:06.859 14:12:57 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:06.859 14:12:57 -- target/dif.sh@82 -- # gen_fio_conf 00:31:06.859 14:12:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:06.859 { 00:31:06.859 "params": { 00:31:06.859 "name": "Nvme$subsystem", 00:31:06.859 "trtype": "$TEST_TRANSPORT", 00:31:06.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.859 "adrfam": "ipv4", 00:31:06.859 "trsvcid": "$NVMF_PORT", 00:31:06.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.859 "hdgst": ${hdgst:-false}, 00:31:06.859 "ddgst": ${ddgst:-false} 00:31:06.859 }, 00:31:06.859 "method": "bdev_nvme_attach_controller" 00:31:06.859 } 00:31:06.859 EOF 00:31:06.859 )") 00:31:06.859 14:12:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:06.859 14:12:57 -- target/dif.sh@54 -- # local file 00:31:06.859 14:12:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:06.859 14:12:57 -- target/dif.sh@56 -- # cat 00:31:06.859 14:12:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:06.859 14:12:57 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.859 14:12:57 -- common/autotest_common.sh@1320 -- # shift 00:31:06.859 14:12:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:06.859 14:12:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.859 14:12:57 -- nvmf/common.sh@542 -- # cat 00:31:06.859 14:12:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.859 14:12:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:06.859 14:12:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:06.859 14:12:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:06.859 14:12:57 -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.859 14:12:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:06.859 14:12:57 -- target/dif.sh@73 -- # cat 00:31:06.859 14:12:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:06.859 { 00:31:06.859 "params": { 00:31:06.859 "name": "Nvme$subsystem", 00:31:06.859 "trtype": "$TEST_TRANSPORT", 00:31:06.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.859 "adrfam": "ipv4", 00:31:06.859 "trsvcid": "$NVMF_PORT", 00:31:06.859 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.859 "hdgst": ${hdgst:-false}, 00:31:06.859 "ddgst": ${ddgst:-false} 00:31:06.859 }, 00:31:06.859 "method": "bdev_nvme_attach_controller" 00:31:06.859 } 00:31:06.859 EOF 00:31:06.859 )") 00:31:06.859 14:12:57 -- nvmf/common.sh@542 -- # cat 00:31:06.859 14:12:57 -- target/dif.sh@72 -- # (( file++ )) 00:31:06.859 14:12:57 -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.859 14:12:57 -- target/dif.sh@73 -- # cat 00:31:06.859 14:12:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:06.859 14:12:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:06.859 { 00:31:06.859 "params": { 00:31:06.859 "name": "Nvme$subsystem", 00:31:06.859 "trtype": "$TEST_TRANSPORT", 00:31:06.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:06.859 "adrfam": "ipv4", 00:31:06.859 "trsvcid": "$NVMF_PORT", 00:31:06.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:06.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:06.859 "hdgst": ${hdgst:-false}, 00:31:06.859 "ddgst": ${ddgst:-false} 00:31:06.859 }, 00:31:06.859 "method": "bdev_nvme_attach_controller" 00:31:06.859 } 00:31:06.859 EOF 00:31:06.859 )") 00:31:06.859 14:12:57 -- target/dif.sh@72 -- # (( file++ )) 00:31:06.859 14:12:57 -- target/dif.sh@72 -- # (( file <= files )) 00:31:06.859 14:12:57 -- nvmf/common.sh@542 -- # cat 00:31:06.859 14:12:57 -- nvmf/common.sh@544 -- # jq . 00:31:06.859 14:12:57 -- nvmf/common.sh@545 -- # IFS=, 00:31:06.859 14:12:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:06.860 "params": { 00:31:06.860 "name": "Nvme0", 00:31:06.860 "trtype": "tcp", 00:31:06.860 "traddr": "10.0.0.2", 00:31:06.860 "adrfam": "ipv4", 00:31:06.860 "trsvcid": "4420", 00:31:06.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:06.860 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:06.860 "hdgst": false, 00:31:06.860 "ddgst": false 00:31:06.860 }, 00:31:06.860 "method": "bdev_nvme_attach_controller" 00:31:06.860 },{ 00:31:06.860 "params": { 00:31:06.860 "name": "Nvme1", 00:31:06.860 "trtype": "tcp", 00:31:06.860 "traddr": "10.0.0.2", 00:31:06.860 "adrfam": "ipv4", 00:31:06.860 "trsvcid": "4420", 00:31:06.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:06.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:06.860 "hdgst": false, 00:31:06.860 "ddgst": false 00:31:06.860 }, 00:31:06.860 "method": "bdev_nvme_attach_controller" 00:31:06.860 },{ 00:31:06.860 "params": { 00:31:06.860 "name": "Nvme2", 00:31:06.860 "trtype": "tcp", 00:31:06.860 "traddr": "10.0.0.2", 00:31:06.860 "adrfam": "ipv4", 00:31:06.860 "trsvcid": "4420", 00:31:06.860 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:06.860 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:06.860 "hdgst": false, 00:31:06.860 "ddgst": false 00:31:06.860 }, 00:31:06.860 "method": "bdev_nvme_attach_controller" 00:31:06.860 }' 00:31:06.860 14:12:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:06.860 14:12:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:06.860 14:12:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:06.860 14:12:57 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:06.860 14:12:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:06.860 14:12:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:06.860 14:12:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:06.860 14:12:57 -- common/autotest_common.sh@1325 -- # [[ 
-n '' ]] 00:31:06.860 14:12:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:06.860 14:12:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:07.118 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:07.118 ... 00:31:07.118 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:07.118 ... 00:31:07.119 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:07.119 ... 00:31:07.119 fio-3.35 00:31:07.119 Starting 24 threads 00:31:07.119 EAL: No free 2048 kB hugepages reported on node 1 00:31:08.056 [2024-07-23 14:12:58.767522] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:08.056 [2024-07-23 14:12:58.767564] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:18.022 00:31:18.022 filename0: (groupid=0, jobs=1): err= 0: pid=3456938: Tue Jul 23 14:13:08 2024 00:31:18.022 read: IOPS=694, BW=2778KiB/s (2845kB/s)(27.2MiB/10018msec) 00:31:18.022 slat (usec): min=6, max=415, avg=30.65, stdev=20.31 00:31:18.022 clat (usec): min=2239, max=42765, avg=22804.06, stdev=3579.70 00:31:18.022 lat (usec): min=2246, max=42791, avg=22834.71, stdev=3581.02 00:31:18.022 clat percentiles (usec): 00:31:18.022 | 1.00th=[ 5997], 5.00th=[16909], 10.00th=[20317], 20.00th=[21890], 00:31:18.022 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:31:18.022 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[27132], 00:31:18.022 | 99.00th=[33162], 99.50th=[34866], 99.90th=[41681], 99.95th=[42730], 00:31:18.022 | 99.99th=[42730] 00:31:18.022 bw ( KiB/s): min= 2682, max= 3128, per=4.58%, avg=2776.50, stdev=112.76, samples=20 00:31:18.022 iops : min= 670, max= 782, avg=694.10, stdev=28.21, samples=20 00:31:18.022 lat (msec) : 4=0.88%, 10=0.45%, 20=7.32%, 50=91.36% 00:31:18.022 cpu : usr=92.20%, sys=3.62%, ctx=318, majf=0, minf=91 00:31:18.022 IO depths : 1=2.9%, 2=7.6%, 4=20.2%, 8=58.9%, 16=10.3%, 32=0.0%, >=64=0.0% 00:31:18.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 complete : 0=0.0%, 4=93.4%, 8=1.3%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 issued rwts: total=6958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.022 filename0: (groupid=0, jobs=1): err= 0: pid=3456939: Tue Jul 23 14:13:08 2024 00:31:18.022 read: IOPS=555, BW=2220KiB/s (2274kB/s)(21.7MiB/10016msec) 00:31:18.022 slat (usec): min=6, max=228, avg=29.17, stdev=17.12 00:31:18.022 clat (usec): min=10563, max=65889, avg=28676.32, stdev=6861.83 00:31:18.022 lat (usec): min=10572, max=65913, avg=28705.49, stdev=6860.86 00:31:18.022 clat percentiles (usec): 00:31:18.022 | 1.00th=[17171], 5.00th=[21890], 10.00th=[22676], 20.00th=[23200], 00:31:18.022 | 30.00th=[23987], 40.00th=[24773], 50.00th=[26608], 60.00th=[28705], 00:31:18.022 | 70.00th=[30802], 80.00th=[33817], 90.00th=[39060], 95.00th=[43254], 00:31:18.022 | 99.00th=[46400], 99.50th=[47973], 99.90th=[63177], 99.95th=[65799], 00:31:18.022 | 99.99th=[65799] 00:31:18.022 bw ( KiB/s): min= 1840, max= 2400, per=3.66%, avg=2217.35, stdev=137.22, samples=20 00:31:18.022 iops : min= 460, max= 600, 
avg=554.30, stdev=34.28, samples=20 00:31:18.022 lat (msec) : 20=2.03%, 50=97.59%, 100=0.38% 00:31:18.022 cpu : usr=89.84%, sys=4.61%, ctx=778, majf=0, minf=67 00:31:18.022 IO depths : 1=0.1%, 2=0.2%, 4=7.7%, 8=77.8%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:18.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 complete : 0=0.0%, 4=90.3%, 8=5.7%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 issued rwts: total=5560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.022 filename0: (groupid=0, jobs=1): err= 0: pid=3456940: Tue Jul 23 14:13:08 2024 00:31:18.022 read: IOPS=597, BW=2390KiB/s (2447kB/s)(23.3MiB/10004msec) 00:31:18.022 slat (usec): min=6, max=129, avg=34.39, stdev=23.43 00:31:18.022 clat (usec): min=4252, max=49825, avg=26591.83, stdev=5793.30 00:31:18.022 lat (usec): min=4260, max=49875, avg=26626.21, stdev=5792.96 00:31:18.022 clat percentiles (usec): 00:31:18.022 | 1.00th=[12649], 5.00th=[20579], 10.00th=[21627], 20.00th=[22676], 00:31:18.022 | 30.00th=[23200], 40.00th=[23725], 50.00th=[24511], 60.00th=[26346], 00:31:18.022 | 70.00th=[28967], 80.00th=[31327], 90.00th=[34341], 95.00th=[36963], 00:31:18.022 | 99.00th=[44303], 99.50th=[46400], 99.90th=[49546], 99.95th=[49546], 00:31:18.022 | 99.99th=[50070] 00:31:18.022 bw ( KiB/s): min= 1944, max= 2634, per=3.94%, avg=2383.47, stdev=180.72, samples=19 00:31:18.022 iops : min= 486, max= 658, avg=595.84, stdev=45.14, samples=19 00:31:18.022 lat (msec) : 10=0.30%, 20=3.58%, 50=96.12% 00:31:18.022 cpu : usr=98.81%, sys=0.78%, ctx=37, majf=0, minf=103 00:31:18.022 IO depths : 1=0.3%, 2=0.9%, 4=8.5%, 8=76.5%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:18.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 complete : 0=0.0%, 4=90.4%, 8=5.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 issued rwts: total=5977,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.022 filename0: (groupid=0, jobs=1): err= 0: pid=3456941: Tue Jul 23 14:13:08 2024 00:31:18.022 read: IOPS=666, BW=2668KiB/s (2732kB/s)(26.1MiB/10010msec) 00:31:18.022 slat (usec): min=6, max=118, avg=33.96, stdev=18.92 00:31:18.022 clat (usec): min=8503, max=59444, avg=23709.33, stdev=3725.31 00:31:18.022 lat (usec): min=8513, max=59460, avg=23743.29, stdev=3722.26 00:31:18.022 clat percentiles (usec): 00:31:18.022 | 1.00th=[13566], 5.00th=[20579], 10.00th=[21365], 20.00th=[22152], 00:31:18.022 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:31:18.022 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25822], 95.00th=[30278], 00:31:18.022 | 99.00th=[39584], 99.50th=[42206], 99.90th=[50594], 99.95th=[59507], 00:31:18.022 | 99.99th=[59507] 00:31:18.022 bw ( KiB/s): min= 2128, max= 2816, per=4.39%, avg=2656.53, stdev=177.17, samples=19 00:31:18.022 iops : min= 532, max= 704, avg=664.11, stdev=44.27, samples=19 00:31:18.022 lat (msec) : 10=0.10%, 20=3.62%, 50=96.03%, 100=0.24% 00:31:18.022 cpu : usr=96.11%, sys=1.78%, ctx=33, majf=0, minf=42 00:31:18.022 IO depths : 1=2.7%, 2=7.4%, 4=21.2%, 8=58.5%, 16=10.3%, 32=0.0%, >=64=0.0% 00:31:18.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 complete : 0=0.0%, 4=93.5%, 8=1.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 issued rwts: total=6676,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.022 filename0: 
(groupid=0, jobs=1): err= 0: pid=3456942: Tue Jul 23 14:13:08 2024 00:31:18.022 read: IOPS=657, BW=2629KiB/s (2692kB/s)(25.7MiB/10008msec) 00:31:18.022 slat (nsec): min=6194, max=81709, avg=27459.40, stdev=16502.61 00:31:18.022 clat (usec): min=8908, max=52861, avg=24180.12, stdev=4349.42 00:31:18.022 lat (usec): min=8915, max=52878, avg=24207.58, stdev=4348.68 00:31:18.022 clat percentiles (usec): 00:31:18.022 | 1.00th=[12911], 5.00th=[19268], 10.00th=[21103], 20.00th=[22152], 00:31:18.022 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23987], 00:31:18.022 | 70.00th=[24249], 80.00th=[25035], 90.00th=[29754], 95.00th=[32900], 00:31:18.022 | 99.00th=[39060], 99.50th=[42206], 99.90th=[52691], 99.95th=[52691], 00:31:18.022 | 99.99th=[52691] 00:31:18.022 bw ( KiB/s): min= 2144, max= 2784, per=4.33%, avg=2620.74, stdev=171.08, samples=19 00:31:18.022 iops : min= 536, max= 696, avg=655.16, stdev=42.77, samples=19 00:31:18.022 lat (msec) : 10=0.23%, 20=6.08%, 50=93.45%, 100=0.24% 00:31:18.022 cpu : usr=98.84%, sys=0.78%, ctx=32, majf=0, minf=63 00:31:18.022 IO depths : 1=0.6%, 2=1.9%, 4=11.4%, 8=72.5%, 16=13.6%, 32=0.0%, >=64=0.0% 00:31:18.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 complete : 0=0.0%, 4=91.5%, 8=4.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 issued rwts: total=6577,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.022 filename0: (groupid=0, jobs=1): err= 0: pid=3456943: Tue Jul 23 14:13:08 2024 00:31:18.022 read: IOPS=644, BW=2577KiB/s (2639kB/s)(25.2MiB/10014msec) 00:31:18.022 slat (usec): min=6, max=116, avg=29.22, stdev=20.40 00:31:18.022 clat (usec): min=7952, max=45958, avg=24667.79, stdev=4423.75 00:31:18.022 lat (usec): min=7974, max=45973, avg=24697.00, stdev=4422.12 00:31:18.022 clat percentiles (usec): 00:31:18.022 | 1.00th=[13698], 5.00th=[20317], 10.00th=[21365], 20.00th=[22414], 00:31:18.022 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:31:18.022 | 70.00th=[24511], 80.00th=[26346], 90.00th=[31327], 95.00th=[34341], 00:31:18.022 | 99.00th=[38536], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:31:18.022 | 99.99th=[45876] 00:31:18.022 bw ( KiB/s): min= 2396, max= 2704, per=4.25%, avg=2573.90, stdev=89.45, samples=20 00:31:18.022 iops : min= 599, max= 676, avg=643.45, stdev=22.38, samples=20 00:31:18.022 lat (msec) : 10=0.20%, 20=4.45%, 50=95.35% 00:31:18.022 cpu : usr=98.79%, sys=0.81%, ctx=20, majf=0, minf=62 00:31:18.022 IO depths : 1=0.6%, 2=1.5%, 4=9.2%, 8=75.2%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:18.022 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 complete : 0=0.0%, 4=90.5%, 8=5.1%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.022 issued rwts: total=6451,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.022 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.023 filename0: (groupid=0, jobs=1): err= 0: pid=3456944: Tue Jul 23 14:13:08 2024 00:31:18.023 read: IOPS=691, BW=2767KiB/s (2834kB/s)(27.1MiB/10014msec) 00:31:18.023 slat (usec): min=6, max=100, avg=35.92, stdev=22.26 00:31:18.023 clat (usec): min=9146, max=39527, avg=22831.84, stdev=2806.50 00:31:18.023 lat (usec): min=9154, max=39549, avg=22867.76, stdev=2807.80 00:31:18.023 clat percentiles (usec): 00:31:18.023 | 1.00th=[13829], 5.00th=[17433], 10.00th=[20579], 20.00th=[21890], 00:31:18.023 | 30.00th=[22414], 40.00th=[22676], 50.00th=[23200], 60.00th=[23462], 00:31:18.023 | 
70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25297], 00:31:18.023 | 99.00th=[33162], 99.50th=[35914], 99.90th=[38536], 99.95th=[39584], 00:31:18.023 | 99.99th=[39584] 00:31:18.023 bw ( KiB/s): min= 2444, max= 3216, per=4.56%, avg=2764.70, stdev=171.67, samples=20 00:31:18.023 iops : min= 611, max= 804, avg=691.15, stdev=42.93, samples=20 00:31:18.023 lat (msec) : 10=0.06%, 20=7.94%, 50=92.00% 00:31:18.023 cpu : usr=98.86%, sys=0.76%, ctx=12, majf=0, minf=60 00:31:18.023 IO depths : 1=4.8%, 2=9.7%, 4=20.2%, 8=57.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:31:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 complete : 0=0.0%, 4=93.0%, 8=1.8%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 issued rwts: total=6928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.023 filename0: (groupid=0, jobs=1): err= 0: pid=3456945: Tue Jul 23 14:13:08 2024 00:31:18.023 read: IOPS=652, BW=2609KiB/s (2671kB/s)(25.6MiB/10030msec) 00:31:18.023 slat (usec): min=6, max=116, avg=26.91, stdev=19.18 00:31:18.023 clat (usec): min=9587, max=57274, avg=24370.02, stdev=4017.59 00:31:18.023 lat (usec): min=9594, max=57300, avg=24396.94, stdev=4016.30 00:31:18.023 clat percentiles (usec): 00:31:18.023 | 1.00th=[15401], 5.00th=[20579], 10.00th=[21627], 20.00th=[22414], 00:31:18.023 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:31:18.023 | 70.00th=[24511], 80.00th=[25035], 90.00th=[28705], 95.00th=[32375], 00:31:18.023 | 99.00th=[38536], 99.50th=[41681], 99.90th=[57410], 99.95th=[57410], 00:31:18.023 | 99.99th=[57410] 00:31:18.023 bw ( KiB/s): min= 2256, max= 2816, per=4.31%, avg=2609.70, stdev=113.19, samples=20 00:31:18.023 iops : min= 564, max= 704, avg=652.40, stdev=28.31, samples=20 00:31:18.023 lat (msec) : 10=0.05%, 20=3.55%, 50=96.16%, 100=0.24% 00:31:18.023 cpu : usr=98.65%, sys=0.95%, ctx=22, majf=0, minf=78 00:31:18.023 IO depths : 1=0.4%, 2=0.9%, 4=7.4%, 8=78.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:31:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 complete : 0=0.0%, 4=89.7%, 8=5.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 issued rwts: total=6541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.023 filename1: (groupid=0, jobs=1): err= 0: pid=3456946: Tue Jul 23 14:13:08 2024 00:31:18.023 read: IOPS=617, BW=2470KiB/s (2529kB/s)(24.2MiB/10020msec) 00:31:18.023 slat (nsec): min=6327, max=95517, avg=28865.61, stdev=20707.58 00:31:18.023 clat (usec): min=7032, max=48548, avg=25748.60, stdev=5140.61 00:31:18.023 lat (usec): min=7041, max=48554, avg=25777.47, stdev=5136.99 00:31:18.023 clat percentiles (usec): 00:31:18.023 | 1.00th=[14877], 5.00th=[20841], 10.00th=[21627], 20.00th=[22676], 00:31:18.023 | 30.00th=[23200], 40.00th=[23725], 50.00th=[24249], 60.00th=[24773], 00:31:18.023 | 70.00th=[26084], 80.00th=[29230], 90.00th=[33162], 95.00th=[36963], 00:31:18.023 | 99.00th=[42206], 99.50th=[43254], 99.90th=[47973], 99.95th=[48497], 00:31:18.023 | 99.99th=[48497] 00:31:18.023 bw ( KiB/s): min= 2004, max= 2640, per=4.08%, avg=2468.35, stdev=137.35, samples=20 00:31:18.023 iops : min= 501, max= 660, avg=617.05, stdev=34.36, samples=20 00:31:18.023 lat (msec) : 10=0.23%, 20=3.26%, 50=96.51% 00:31:18.023 cpu : usr=98.86%, sys=0.74%, ctx=16, majf=0, minf=66 00:31:18.023 IO depths : 1=0.3%, 2=0.9%, 4=8.0%, 8=77.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:31:18.023 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 complete : 0=0.0%, 4=90.2%, 8=5.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 issued rwts: total=6187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.023 filename1: (groupid=0, jobs=1): err= 0: pid=3456947: Tue Jul 23 14:13:08 2024 00:31:18.023 read: IOPS=616, BW=2465KiB/s (2524kB/s)(24.1MiB/10006msec) 00:31:18.023 slat (nsec): min=6044, max=95257, avg=29710.69, stdev=19398.15 00:31:18.023 clat (usec): min=7234, max=59591, avg=25821.03, stdev=5310.02 00:31:18.023 lat (usec): min=7252, max=59608, avg=25850.74, stdev=5307.10 00:31:18.023 clat percentiles (usec): 00:31:18.023 | 1.00th=[13173], 5.00th=[20579], 10.00th=[21627], 20.00th=[22676], 00:31:18.023 | 30.00th=[23200], 40.00th=[23725], 50.00th=[23987], 60.00th=[24773], 00:31:18.023 | 70.00th=[26346], 80.00th=[29492], 90.00th=[33424], 95.00th=[36439], 00:31:18.023 | 99.00th=[41681], 99.50th=[43779], 99.90th=[47973], 99.95th=[59507], 00:31:18.023 | 99.99th=[59507] 00:31:18.023 bw ( KiB/s): min= 2128, max= 2608, per=4.05%, avg=2455.74, stdev=106.11, samples=19 00:31:18.023 iops : min= 532, max= 652, avg=613.89, stdev=26.52, samples=19 00:31:18.023 lat (msec) : 10=0.28%, 20=4.01%, 50=95.64%, 100=0.08% 00:31:18.023 cpu : usr=98.67%, sys=0.86%, ctx=89, majf=0, minf=80 00:31:18.023 IO depths : 1=0.1%, 2=0.3%, 4=6.5%, 8=78.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:31:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 complete : 0=0.0%, 4=90.2%, 8=6.3%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 issued rwts: total=6165,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.023 filename1: (groupid=0, jobs=1): err= 0: pid=3456948: Tue Jul 23 14:13:08 2024 00:31:18.023 read: IOPS=634, BW=2537KiB/s (2598kB/s)(24.8MiB/10012msec) 00:31:18.023 slat (nsec): min=5953, max=98420, avg=27535.91, stdev=19224.35 00:31:18.023 clat (usec): min=6101, max=57843, avg=25078.31, stdev=4787.68 00:31:18.023 lat (usec): min=6115, max=57859, avg=25105.85, stdev=4786.03 00:31:18.023 clat percentiles (usec): 00:31:18.023 | 1.00th=[13304], 5.00th=[20579], 10.00th=[21627], 20.00th=[22414], 00:31:18.023 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[24249], 00:31:18.023 | 70.00th=[24773], 80.00th=[27132], 90.00th=[31851], 95.00th=[34866], 00:31:18.023 | 99.00th=[41157], 99.50th=[43254], 99.90th=[49021], 99.95th=[57934], 00:31:18.023 | 99.99th=[57934] 00:31:18.023 bw ( KiB/s): min= 2284, max= 2656, per=4.17%, avg=2525.37, stdev=78.06, samples=19 00:31:18.023 iops : min= 571, max= 664, avg=631.32, stdev=19.52, samples=19 00:31:18.023 lat (msec) : 10=0.05%, 20=3.78%, 50=96.09%, 100=0.08% 00:31:18.023 cpu : usr=98.74%, sys=0.87%, ctx=21, majf=0, minf=75 00:31:18.023 IO depths : 1=0.2%, 2=0.5%, 4=6.9%, 8=77.6%, 16=14.9%, 32=0.0%, >=64=0.0% 00:31:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 complete : 0=0.0%, 4=90.2%, 8=6.5%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 issued rwts: total=6350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.023 filename1: (groupid=0, jobs=1): err= 0: pid=3456949: Tue Jul 23 14:13:08 2024 00:31:18.023 read: IOPS=572, BW=2291KiB/s (2346kB/s)(22.4MiB/10004msec) 00:31:18.023 slat (nsec): min=6264, max=93531, avg=18517.07, stdev=17838.54 00:31:18.023 clat (usec): 
min=8494, max=59518, avg=27834.38, stdev=6345.14 00:31:18.023 lat (usec): min=8501, max=59536, avg=27852.90, stdev=6342.43 00:31:18.023 clat percentiles (usec): 00:31:18.023 | 1.00th=[16057], 5.00th=[20841], 10.00th=[22152], 20.00th=[23200], 00:31:18.023 | 30.00th=[23987], 40.00th=[24511], 50.00th=[25822], 60.00th=[27395], 00:31:18.023 | 70.00th=[30016], 80.00th=[32900], 90.00th=[36963], 95.00th=[41157], 00:31:18.023 | 99.00th=[44827], 99.50th=[45876], 99.90th=[59507], 99.95th=[59507], 00:31:18.023 | 99.99th=[59507] 00:31:18.023 bw ( KiB/s): min= 2072, max= 2512, per=3.78%, avg=2287.05, stdev=119.32, samples=19 00:31:18.023 iops : min= 518, max= 628, avg=571.68, stdev=29.78, samples=19 00:31:18.023 lat (msec) : 10=0.17%, 20=3.16%, 50=96.39%, 100=0.28% 00:31:18.023 cpu : usr=98.79%, sys=0.76%, ctx=17, majf=0, minf=97 00:31:18.023 IO depths : 1=0.3%, 2=1.1%, 4=8.2%, 8=76.2%, 16=14.1%, 32=0.0%, >=64=0.0% 00:31:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 complete : 0=0.0%, 4=90.5%, 8=5.5%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 issued rwts: total=5729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.023 filename1: (groupid=0, jobs=1): err= 0: pid=3456950: Tue Jul 23 14:13:08 2024 00:31:18.023 read: IOPS=619, BW=2479KiB/s (2539kB/s)(24.3MiB/10020msec) 00:31:18.023 slat (nsec): min=6276, max=93312, avg=29117.24, stdev=18873.76 00:31:18.023 clat (usec): min=6433, max=46686, avg=25651.33, stdev=5030.99 00:31:18.023 lat (usec): min=6441, max=46694, avg=25680.45, stdev=5028.68 00:31:18.023 clat percentiles (usec): 00:31:18.023 | 1.00th=[12780], 5.00th=[20841], 10.00th=[21890], 20.00th=[22676], 00:31:18.023 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23987], 60.00th=[24511], 00:31:18.023 | 70.00th=[26084], 80.00th=[29754], 90.00th=[32900], 95.00th=[35914], 00:31:18.023 | 99.00th=[39584], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:18.023 | 99.99th=[46924] 00:31:18.023 bw ( KiB/s): min= 2304, max= 2584, per=4.09%, avg=2477.70, stdev=83.74, samples=20 00:31:18.023 iops : min= 576, max= 646, avg=619.40, stdev=20.93, samples=20 00:31:18.023 lat (msec) : 10=0.31%, 20=3.35%, 50=96.35% 00:31:18.023 cpu : usr=98.74%, sys=0.85%, ctx=55, majf=0, minf=71 00:31:18.023 IO depths : 1=0.1%, 2=0.4%, 4=7.6%, 8=77.6%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:18.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 complete : 0=0.0%, 4=90.3%, 8=5.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.023 issued rwts: total=6211,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.023 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.023 filename1: (groupid=0, jobs=1): err= 0: pid=3456951: Tue Jul 23 14:13:08 2024 00:31:18.023 read: IOPS=630, BW=2520KiB/s (2581kB/s)(24.6MiB/10005msec) 00:31:18.023 slat (usec): min=6, max=109, avg=30.69, stdev=21.56 00:31:18.023 clat (usec): min=4520, max=58906, avg=25206.15, stdev=4785.70 00:31:18.023 lat (usec): min=4533, max=58923, avg=25236.85, stdev=4782.47 00:31:18.023 clat percentiles (usec): 00:31:18.023 | 1.00th=[14615], 5.00th=[20841], 10.00th=[21627], 20.00th=[22414], 00:31:18.023 | 30.00th=[22938], 40.00th=[23462], 50.00th=[23725], 60.00th=[24249], 00:31:18.024 | 70.00th=[25297], 80.00th=[28181], 90.00th=[31851], 95.00th=[34866], 00:31:18.024 | 99.00th=[39060], 99.50th=[41157], 99.90th=[58983], 99.95th=[58983], 00:31:18.024 | 99.99th=[58983] 00:31:18.024 bw ( KiB/s): min= 2176, max= 2736, per=4.15%, 
avg=2512.63, stdev=145.44, samples=19 00:31:18.024 iops : min= 544, max= 684, avg=628.11, stdev=36.36, samples=19 00:31:18.024 lat (msec) : 10=0.32%, 20=2.97%, 50=96.46%, 100=0.25% 00:31:18.024 cpu : usr=98.97%, sys=0.63%, ctx=18, majf=0, minf=49 00:31:18.024 IO depths : 1=0.8%, 2=1.7%, 4=9.4%, 8=74.4%, 16=13.7%, 32=0.0%, >=64=0.0% 00:31:18.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 complete : 0=0.0%, 4=90.7%, 8=5.5%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 issued rwts: total=6304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.024 filename1: (groupid=0, jobs=1): err= 0: pid=3456952: Tue Jul 23 14:13:08 2024 00:31:18.024 read: IOPS=604, BW=2417KiB/s (2475kB/s)(23.6MiB/10021msec) 00:31:18.024 slat (nsec): min=6219, max=97678, avg=20668.51, stdev=18739.03 00:31:18.024 clat (usec): min=8846, max=46001, avg=26371.15, stdev=5631.89 00:31:18.024 lat (usec): min=8861, max=46014, avg=26391.82, stdev=5630.16 00:31:18.024 clat percentiles (usec): 00:31:18.024 | 1.00th=[14484], 5.00th=[19268], 10.00th=[21627], 20.00th=[22676], 00:31:18.024 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24511], 60.00th=[25560], 00:31:18.024 | 70.00th=[27657], 80.00th=[30278], 90.00th=[34341], 95.00th=[37487], 00:31:18.024 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:31:18.024 | 99.99th=[45876] 00:31:18.024 bw ( KiB/s): min= 2160, max= 2584, per=3.99%, avg=2414.95, stdev=122.52, samples=20 00:31:18.024 iops : min= 540, max= 646, avg=603.70, stdev=30.63, samples=20 00:31:18.024 lat (msec) : 10=0.05%, 20=5.63%, 50=94.32% 00:31:18.024 cpu : usr=98.89%, sys=0.68%, ctx=17, majf=0, minf=81 00:31:18.024 IO depths : 1=0.1%, 2=0.5%, 4=6.4%, 8=78.1%, 16=14.9%, 32=0.0%, >=64=0.0% 00:31:18.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 complete : 0=0.0%, 4=90.1%, 8=6.5%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 issued rwts: total=6054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.024 filename1: (groupid=0, jobs=1): err= 0: pid=3456953: Tue Jul 23 14:13:08 2024 00:31:18.024 read: IOPS=624, BW=2500KiB/s (2560kB/s)(24.4MiB/10016msec) 00:31:18.024 slat (usec): min=6, max=122, avg=30.23, stdev=21.12 00:31:18.024 clat (usec): min=7141, max=49240, avg=25449.43, stdev=4829.64 00:31:18.024 lat (usec): min=7156, max=49247, avg=25479.66, stdev=4826.98 00:31:18.024 clat percentiles (usec): 00:31:18.024 | 1.00th=[15139], 5.00th=[20841], 10.00th=[21890], 20.00th=[22676], 00:31:18.024 | 30.00th=[23200], 40.00th=[23462], 50.00th=[23987], 60.00th=[24249], 00:31:18.024 | 70.00th=[25297], 80.00th=[29230], 90.00th=[32637], 95.00th=[34866], 00:31:18.024 | 99.00th=[40633], 99.50th=[44303], 99.90th=[49021], 99.95th=[49021], 00:31:18.024 | 99.99th=[49021] 00:31:18.024 bw ( KiB/s): min= 2260, max= 2640, per=4.12%, avg=2497.15, stdev=82.77, samples=20 00:31:18.024 iops : min= 565, max= 660, avg=624.25, stdev=20.70, samples=20 00:31:18.024 lat (msec) : 10=0.13%, 20=3.55%, 50=96.33% 00:31:18.024 cpu : usr=98.69%, sys=0.92%, ctx=16, majf=0, minf=58 00:31:18.024 IO depths : 1=0.1%, 2=0.3%, 4=6.5%, 8=78.5%, 16=14.5%, 32=0.0%, >=64=0.0% 00:31:18.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 complete : 0=0.0%, 4=89.9%, 8=6.5%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 issued rwts: total=6259,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.024 
latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.024 filename2: (groupid=0, jobs=1): err= 0: pid=3456954: Tue Jul 23 14:13:08 2024 00:31:18.024 read: IOPS=675, BW=2701KiB/s (2766kB/s)(26.4MiB/10008msec) 00:31:18.024 slat (usec): min=5, max=106, avg=38.75, stdev=21.09 00:31:18.024 clat (usec): min=10357, max=55617, avg=23319.41, stdev=2536.80 00:31:18.024 lat (usec): min=10366, max=55633, avg=23358.17, stdev=2534.04 00:31:18.024 clat percentiles (usec): 00:31:18.024 | 1.00th=[19268], 5.00th=[20841], 10.00th=[21365], 20.00th=[22152], 00:31:18.024 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:31:18.024 | 70.00th=[23725], 80.00th=[23987], 90.00th=[24511], 95.00th=[25560], 00:31:18.024 | 99.00th=[34341], 99.50th=[35390], 99.90th=[46924], 99.95th=[55313], 00:31:18.024 | 99.99th=[55837] 00:31:18.024 bw ( KiB/s): min= 2272, max= 2816, per=4.44%, avg=2690.21, stdev=155.62, samples=19 00:31:18.024 iops : min= 568, max= 704, avg=672.53, stdev=38.88, samples=19 00:31:18.024 lat (msec) : 20=1.61%, 50=98.31%, 100=0.07% 00:31:18.024 cpu : usr=98.90%, sys=0.71%, ctx=13, majf=0, minf=44 00:31:18.024 IO depths : 1=5.5%, 2=11.0%, 4=23.2%, 8=53.2%, 16=7.1%, 32=0.0%, >=64=0.0% 00:31:18.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 complete : 0=0.0%, 4=93.7%, 8=0.6%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 issued rwts: total=6758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.024 filename2: (groupid=0, jobs=1): err= 0: pid=3456955: Tue Jul 23 14:13:08 2024 00:31:18.024 read: IOPS=596, BW=2384KiB/s (2442kB/s)(23.3MiB/10007msec) 00:31:18.024 slat (usec): min=6, max=105, avg=28.77, stdev=21.40 00:31:18.024 clat (usec): min=7699, max=65453, avg=26687.40, stdev=5786.37 00:31:18.024 lat (usec): min=7715, max=65477, avg=26716.17, stdev=5782.17 00:31:18.024 clat percentiles (usec): 00:31:18.024 | 1.00th=[15401], 5.00th=[21103], 10.00th=[22152], 20.00th=[22938], 00:31:18.024 | 30.00th=[23462], 40.00th=[23987], 50.00th=[24773], 60.00th=[25822], 00:31:18.024 | 70.00th=[27657], 80.00th=[30802], 90.00th=[34866], 95.00th=[38536], 00:31:18.024 | 99.00th=[44827], 99.50th=[46400], 99.90th=[57410], 99.95th=[65274], 00:31:18.024 | 99.99th=[65274] 00:31:18.024 bw ( KiB/s): min= 2048, max= 2592, per=3.93%, avg=2381.05, stdev=134.85, samples=19 00:31:18.024 iops : min= 512, max= 648, avg=595.26, stdev=33.71, samples=19 00:31:18.024 lat (msec) : 10=0.08%, 20=3.60%, 50=96.04%, 100=0.27% 00:31:18.024 cpu : usr=98.96%, sys=0.64%, ctx=13, majf=0, minf=83 00:31:18.024 IO depths : 1=0.3%, 2=0.9%, 4=8.2%, 8=76.5%, 16=14.1%, 32=0.0%, >=64=0.0% 00:31:18.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 complete : 0=0.0%, 4=90.4%, 8=5.6%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 issued rwts: total=5965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.024 filename2: (groupid=0, jobs=1): err= 0: pid=3456956: Tue Jul 23 14:13:08 2024 00:31:18.024 read: IOPS=619, BW=2478KiB/s (2538kB/s)(24.2MiB/10016msec) 00:31:18.024 slat (usec): min=6, max=110, avg=28.32, stdev=20.73 00:31:18.024 clat (usec): min=7127, max=49018, avg=25677.61, stdev=5122.41 00:31:18.024 lat (usec): min=7135, max=49026, avg=25705.93, stdev=5120.61 00:31:18.024 clat percentiles (usec): 00:31:18.024 | 1.00th=[13698], 5.00th=[20317], 10.00th=[21890], 20.00th=[22676], 00:31:18.024 | 30.00th=[23200], 
40.00th=[23725], 50.00th=[23987], 60.00th=[24511], 00:31:18.024 | 70.00th=[26084], 80.00th=[29754], 90.00th=[33424], 95.00th=[35914], 00:31:18.024 | 99.00th=[41681], 99.50th=[44303], 99.90th=[46924], 99.95th=[49021], 00:31:18.024 | 99.99th=[49021] 00:31:18.024 bw ( KiB/s): min= 2228, max= 2560, per=4.09%, avg=2475.95, stdev=69.12, samples=20 00:31:18.024 iops : min= 557, max= 640, avg=618.95, stdev=17.30, samples=20 00:31:18.024 lat (msec) : 10=0.16%, 20=4.24%, 50=95.60% 00:31:18.024 cpu : usr=98.69%, sys=0.90%, ctx=19, majf=0, minf=64 00:31:18.024 IO depths : 1=0.1%, 2=0.3%, 4=6.8%, 8=78.7%, 16=14.1%, 32=0.0%, >=64=0.0% 00:31:18.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 complete : 0=0.0%, 4=89.9%, 8=6.0%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 issued rwts: total=6206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.024 filename2: (groupid=0, jobs=1): err= 0: pid=3456957: Tue Jul 23 14:13:08 2024 00:31:18.024 read: IOPS=612, BW=2451KiB/s (2510kB/s)(24.0MiB/10016msec) 00:31:18.024 slat (usec): min=6, max=178, avg=27.51, stdev=16.49 00:31:18.024 clat (usec): min=8759, max=47757, avg=25968.77, stdev=5181.34 00:31:18.024 lat (usec): min=8768, max=47807, avg=25996.28, stdev=5181.31 00:31:18.024 clat percentiles (usec): 00:31:18.024 | 1.00th=[13960], 5.00th=[21103], 10.00th=[21890], 20.00th=[22938], 00:31:18.024 | 30.00th=[23462], 40.00th=[23725], 50.00th=[24249], 60.00th=[24773], 00:31:18.024 | 70.00th=[26608], 80.00th=[29754], 90.00th=[33817], 95.00th=[36963], 00:31:18.024 | 99.00th=[43254], 99.50th=[44303], 99.90th=[46400], 99.95th=[46400], 00:31:18.024 | 99.99th=[47973] 00:31:18.024 bw ( KiB/s): min= 2176, max= 2792, per=4.04%, avg=2448.55, stdev=139.26, samples=20 00:31:18.024 iops : min= 544, max= 698, avg=612.10, stdev=34.81, samples=20 00:31:18.024 lat (msec) : 10=0.15%, 20=3.19%, 50=96.66% 00:31:18.024 cpu : usr=95.68%, sys=2.11%, ctx=100, majf=0, minf=85 00:31:18.024 IO depths : 1=0.1%, 2=0.5%, 4=6.6%, 8=78.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:31:18.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.024 issued rwts: total=6138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.024 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.024 filename2: (groupid=0, jobs=1): err= 0: pid=3456958: Tue Jul 23 14:13:08 2024 00:31:18.024 read: IOPS=684, BW=2739KiB/s (2805kB/s)(26.8MiB/10016msec) 00:31:18.024 slat (nsec): min=6272, max=97674, avg=29911.82, stdev=20404.71 00:31:18.024 clat (usec): min=2577, max=42275, avg=23129.03, stdev=2791.76 00:31:18.024 lat (usec): min=2588, max=42283, avg=23158.95, stdev=2790.48 00:31:18.024 clat percentiles (usec): 00:31:18.024 | 1.00th=[10945], 5.00th=[20841], 10.00th=[21627], 20.00th=[22152], 00:31:18.024 | 30.00th=[22676], 40.00th=[22938], 50.00th=[23200], 60.00th=[23462], 00:31:18.024 | 70.00th=[23725], 80.00th=[24249], 90.00th=[24773], 95.00th=[25560], 00:31:18.024 | 99.00th=[32637], 99.50th=[34341], 99.90th=[37487], 99.95th=[42206], 00:31:18.024 | 99.99th=[42206] 00:31:18.024 bw ( KiB/s): min= 2560, max= 2896, per=4.52%, avg=2736.50, stdev=81.86, samples=20 00:31:18.024 iops : min= 640, max= 724, avg=684.10, stdev=20.48, samples=20 00:31:18.024 lat (msec) : 4=0.87%, 10=0.06%, 20=1.22%, 50=97.84% 00:31:18.024 cpu : usr=99.02%, sys=0.60%, ctx=18, majf=0, minf=56 00:31:18.024 IO depths : 1=5.5%, 2=11.1%, 
4=22.9%, 8=53.3%, 16=7.1%, 32=0.0%, >=64=0.0% 00:31:18.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.025 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.025 issued rwts: total=6858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.025 filename2: (groupid=0, jobs=1): err= 0: pid=3456959: Tue Jul 23 14:13:08 2024 00:31:18.025 read: IOPS=650, BW=2600KiB/s (2663kB/s)(25.5MiB/10028msec) 00:31:18.025 slat (nsec): min=6258, max=79498, avg=21358.07, stdev=13821.37 00:31:18.025 clat (usec): min=7790, max=65278, avg=24480.64, stdev=4322.15 00:31:18.025 lat (usec): min=7803, max=65293, avg=24501.99, stdev=4321.60 00:31:18.025 clat percentiles (usec): 00:31:18.025 | 1.00th=[15926], 5.00th=[20841], 10.00th=[21627], 20.00th=[22414], 00:31:18.025 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23725], 60.00th=[23987], 00:31:18.025 | 70.00th=[24249], 80.00th=[25035], 90.00th=[29230], 95.00th=[32637], 00:31:18.025 | 99.00th=[40109], 99.50th=[43779], 99.90th=[65274], 99.95th=[65274], 00:31:18.025 | 99.99th=[65274] 00:31:18.025 bw ( KiB/s): min= 2184, max= 2736, per=4.29%, avg=2600.90, stdev=125.91, samples=20 00:31:18.025 iops : min= 546, max= 684, avg=650.20, stdev=31.48, samples=20 00:31:18.025 lat (msec) : 10=0.05%, 20=3.05%, 50=96.66%, 100=0.25% 00:31:18.025 cpu : usr=98.17%, sys=1.24%, ctx=124, majf=0, minf=58 00:31:18.025 IO depths : 1=0.4%, 2=0.8%, 4=7.1%, 8=78.3%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:18.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.025 complete : 0=0.0%, 4=89.7%, 8=5.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.025 issued rwts: total=6519,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.025 filename2: (groupid=0, jobs=1): err= 0: pid=3456960: Tue Jul 23 14:13:08 2024 00:31:18.025 read: IOPS=644, BW=2577KiB/s (2639kB/s)(25.2MiB/10002msec) 00:31:18.025 slat (nsec): min=6411, max=98794, avg=28222.81, stdev=18095.29 00:31:18.025 clat (usec): min=3097, max=61301, avg=24657.85, stdev=5750.21 00:31:18.025 lat (usec): min=3104, max=61350, avg=24686.08, stdev=5749.30 00:31:18.025 clat percentiles (usec): 00:31:18.025 | 1.00th=[ 9634], 5.00th=[16581], 10.00th=[20841], 20.00th=[22152], 00:31:18.025 | 30.00th=[22676], 40.00th=[23200], 50.00th=[23462], 60.00th=[23987], 00:31:18.025 | 70.00th=[24511], 80.00th=[27132], 90.00th=[32637], 95.00th=[36439], 00:31:18.025 | 99.00th=[42206], 99.50th=[44827], 99.90th=[55313], 99.95th=[55313], 00:31:18.025 | 99.99th=[61080] 00:31:18.025 bw ( KiB/s): min= 2040, max= 2800, per=4.23%, avg=2560.95, stdev=190.18, samples=19 00:31:18.025 iops : min= 510, max= 700, avg=640.21, stdev=47.54, samples=19 00:31:18.025 lat (msec) : 4=0.22%, 10=0.88%, 20=7.02%, 50=91.63%, 100=0.25% 00:31:18.025 cpu : usr=98.89%, sys=0.71%, ctx=55, majf=0, minf=86 00:31:18.025 IO depths : 1=0.7%, 2=3.3%, 4=13.8%, 8=68.6%, 16=13.5%, 32=0.0%, >=64=0.0% 00:31:18.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.025 complete : 0=0.0%, 4=91.5%, 8=4.4%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.025 issued rwts: total=6443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.025 filename2: (groupid=0, jobs=1): err= 0: pid=3456961: Tue Jul 23 14:13:08 2024 00:31:18.025 read: IOPS=602, BW=2410KiB/s (2467kB/s)(23.5MiB/10005msec) 00:31:18.025 slat (usec): 
min=6, max=103, avg=27.56, stdev=23.73 00:31:18.025 clat (usec): min=4794, max=64785, avg=26427.88, stdev=5872.16 00:31:18.025 lat (usec): min=4801, max=64797, avg=26455.44, stdev=5870.78 00:31:18.025 clat percentiles (usec): 00:31:18.025 | 1.00th=[13829], 5.00th=[19006], 10.00th=[21627], 20.00th=[22676], 00:31:18.025 | 30.00th=[23200], 40.00th=[23987], 50.00th=[24511], 60.00th=[26084], 00:31:18.025 | 70.00th=[28181], 80.00th=[30802], 90.00th=[34341], 95.00th=[38011], 00:31:18.025 | 99.00th=[44303], 99.50th=[46924], 99.90th=[63177], 99.95th=[64750], 00:31:18.025 | 99.99th=[64750] 00:31:18.025 bw ( KiB/s): min= 2064, max= 2704, per=3.97%, avg=2404.32, stdev=145.04, samples=19 00:31:18.025 iops : min= 516, max= 676, avg=601.00, stdev=36.26, samples=19 00:31:18.025 lat (msec) : 10=0.27%, 20=5.67%, 50=93.70%, 100=0.37% 00:31:18.025 cpu : usr=98.84%, sys=0.77%, ctx=15, majf=0, minf=65 00:31:18.025 IO depths : 1=0.1%, 2=0.3%, 4=4.9%, 8=79.7%, 16=15.0%, 32=0.0%, >=64=0.0% 00:31:18.025 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.025 complete : 0=0.0%, 4=89.6%, 8=7.2%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:18.025 issued rwts: total=6027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:18.025 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:18.025 00:31:18.025 Run status group 0 (all jobs): 00:31:18.025 READ: bw=59.1MiB/s (62.0MB/s), 2220KiB/s-2778KiB/s (2274kB/s-2845kB/s), io=593MiB (622MB), run=10002-10030msec 00:31:18.285 14:13:09 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:18.285 14:13:09 -- target/dif.sh@43 -- # local sub 00:31:18.285 14:13:09 -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.285 14:13:09 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:18.285 14:13:09 -- target/dif.sh@36 -- # local sub_id=0 00:31:18.285 14:13:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.285 14:13:09 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:18.285 14:13:09 -- target/dif.sh@36 -- # local sub_id=1 00:31:18.285 14:13:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@45 -- # for sub in "$@" 00:31:18.285 14:13:09 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:18.285 14:13:09 -- target/dif.sh@36 -- # local sub_id=2 00:31:18.285 14:13:09 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:18.285 14:13:09 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:18.285 14:13:09 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:18.285 14:13:09 -- target/dif.sh@115 -- # numjobs=2 00:31:18.285 14:13:09 -- target/dif.sh@115 -- # iodepth=8 00:31:18.285 14:13:09 -- target/dif.sh@115 -- # runtime=5 00:31:18.285 14:13:09 -- target/dif.sh@115 -- # files=1 00:31:18.285 14:13:09 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:18.285 14:13:09 -- target/dif.sh@28 -- # local sub 00:31:18.285 14:13:09 -- target/dif.sh@30 -- # for sub in "$@" 00:31:18.285 14:13:09 -- target/dif.sh@31 -- # create_subsystem 0 00:31:18.285 14:13:09 -- target/dif.sh@18 -- # local sub_id=0 00:31:18.285 14:13:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 bdev_null0 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.285 14:13:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:18.285 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.285 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.285 [2024-07-23 14:13:09.260424] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.286 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.286 14:13:09 -- target/dif.sh@30 -- # for sub in "$@" 00:31:18.286 14:13:09 -- target/dif.sh@31 -- # create_subsystem 1 00:31:18.286 14:13:09 -- target/dif.sh@18 -- # local sub_id=1 00:31:18.286 14:13:09 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:18.286 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.286 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.286 bdev_null1 00:31:18.286 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.286 14:13:09 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:18.286 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.286 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.286 14:13:09 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.286 14:13:09 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:18.286 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.286 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.286 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.286 14:13:09 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:18.286 14:13:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:18.286 14:13:09 -- common/autotest_common.sh@10 -- # set +x 00:31:18.286 14:13:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:18.567 14:13:09 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:18.567 14:13:09 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:18.567 14:13:09 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:18.567 14:13:09 -- nvmf/common.sh@520 -- # config=() 00:31:18.567 14:13:09 -- nvmf/common.sh@520 -- # local subsystem config 00:31:18.567 14:13:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:18.567 14:13:09 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.567 14:13:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:18.567 { 00:31:18.567 "params": { 00:31:18.567 "name": "Nvme$subsystem", 00:31:18.567 "trtype": "$TEST_TRANSPORT", 00:31:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.567 "adrfam": "ipv4", 00:31:18.567 "trsvcid": "$NVMF_PORT", 00:31:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.567 "hdgst": ${hdgst:-false}, 00:31:18.567 "ddgst": ${ddgst:-false} 00:31:18.567 }, 00:31:18.567 "method": "bdev_nvme_attach_controller" 00:31:18.567 } 00:31:18.567 EOF 00:31:18.567 )") 00:31:18.567 14:13:09 -- target/dif.sh@82 -- # gen_fio_conf 00:31:18.567 14:13:09 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.567 14:13:09 -- target/dif.sh@54 -- # local file 00:31:18.567 14:13:09 -- target/dif.sh@56 -- # cat 00:31:18.567 14:13:09 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:18.567 14:13:09 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:18.567 14:13:09 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:18.567 14:13:09 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.567 14:13:09 -- common/autotest_common.sh@1320 -- # shift 00:31:18.567 14:13:09 -- nvmf/common.sh@542 -- # cat 00:31:18.567 14:13:09 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:18.567 14:13:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.567 14:13:09 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:18.567 14:13:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.567 14:13:09 -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.567 14:13:09 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:18.567 14:13:09 -- target/dif.sh@73 -- # cat 00:31:18.567 14:13:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:18.567 14:13:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:18.567 14:13:09 -- nvmf/common.sh@542 
-- # config+=("$(cat <<-EOF 00:31:18.567 { 00:31:18.567 "params": { 00:31:18.567 "name": "Nvme$subsystem", 00:31:18.567 "trtype": "$TEST_TRANSPORT", 00:31:18.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.567 "adrfam": "ipv4", 00:31:18.567 "trsvcid": "$NVMF_PORT", 00:31:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.567 "hdgst": ${hdgst:-false}, 00:31:18.567 "ddgst": ${ddgst:-false} 00:31:18.567 }, 00:31:18.567 "method": "bdev_nvme_attach_controller" 00:31:18.567 } 00:31:18.567 EOF 00:31:18.567 )") 00:31:18.567 14:13:09 -- target/dif.sh@72 -- # (( file++ )) 00:31:18.567 14:13:09 -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.567 14:13:09 -- nvmf/common.sh@542 -- # cat 00:31:18.567 14:13:09 -- nvmf/common.sh@544 -- # jq . 00:31:18.567 14:13:09 -- nvmf/common.sh@545 -- # IFS=, 00:31:18.567 14:13:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:18.567 "params": { 00:31:18.567 "name": "Nvme0", 00:31:18.567 "trtype": "tcp", 00:31:18.567 "traddr": "10.0.0.2", 00:31:18.567 "adrfam": "ipv4", 00:31:18.567 "trsvcid": "4420", 00:31:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:18.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:18.567 "hdgst": false, 00:31:18.567 "ddgst": false 00:31:18.567 }, 00:31:18.567 "method": "bdev_nvme_attach_controller" 00:31:18.567 },{ 00:31:18.567 "params": { 00:31:18.567 "name": "Nvme1", 00:31:18.567 "trtype": "tcp", 00:31:18.567 "traddr": "10.0.0.2", 00:31:18.567 "adrfam": "ipv4", 00:31:18.567 "trsvcid": "4420", 00:31:18.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:18.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:18.567 "hdgst": false, 00:31:18.567 "ddgst": false 00:31:18.567 }, 00:31:18.567 "method": "bdev_nvme_attach_controller" 00:31:18.567 }' 00:31:18.567 14:13:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:18.567 14:13:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:18.567 14:13:09 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.567 14:13:09 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.567 14:13:09 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:18.567 14:13:09 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:18.567 14:13:09 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:18.567 14:13:09 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:18.567 14:13:09 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:18.567 14:13:09 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.836 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:18.836 ... 00:31:18.836 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:18.836 ... 00:31:18.836 fio-3.35 00:31:18.836 Starting 4 threads 00:31:18.836 EAL: No free 2048 kB hugepages reported on node 1 00:31:19.404 [2024-07-23 14:13:10.281571] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:31:19.404 [2024-07-23 14:13:10.281616] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:24.677 00:31:24.677 filename0: (groupid=0, jobs=1): err= 0: pid=3458937: Tue Jul 23 14:13:15 2024 00:31:24.677 read: IOPS=2917, BW=22.8MiB/s (23.9MB/s)(114MiB/5002msec) 00:31:24.677 slat (nsec): min=6153, max=32371, avg=8506.09, stdev=2769.43 00:31:24.677 clat (usec): min=1161, max=5187, avg=2720.27, stdev=475.62 00:31:24.677 lat (usec): min=1167, max=5194, avg=2728.78, stdev=475.65 00:31:24.677 clat percentiles (usec): 00:31:24.677 | 1.00th=[ 1745], 5.00th=[ 1975], 10.00th=[ 2114], 20.00th=[ 2311], 00:31:24.677 | 30.00th=[ 2474], 40.00th=[ 2606], 50.00th=[ 2704], 60.00th=[ 2835], 00:31:24.677 | 70.00th=[ 2966], 80.00th=[ 3097], 90.00th=[ 3294], 95.00th=[ 3523], 00:31:24.677 | 99.00th=[ 3982], 99.50th=[ 4178], 99.90th=[ 4621], 99.95th=[ 4752], 00:31:24.677 | 99.99th=[ 5211] 00:31:24.677 bw ( KiB/s): min=22736, max=23984, per=27.40%, avg=23343.80, stdev=375.62, samples=10 00:31:24.677 iops : min= 2842, max= 2998, avg=2917.90, stdev=46.93, samples=10 00:31:24.677 lat (msec) : 2=5.74%, 4=93.35%, 10=0.91% 00:31:24.677 cpu : usr=97.06%, sys=2.60%, ctx=5, majf=0, minf=1 00:31:24.677 IO depths : 1=0.2%, 2=1.4%, 4=65.9%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.677 complete : 0=0.0%, 4=95.7%, 8=4.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.677 issued rwts: total=14592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.677 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:24.677 filename0: (groupid=0, jobs=1): err= 0: pid=3458938: Tue Jul 23 14:13:15 2024 00:31:24.677 read: IOPS=2843, BW=22.2MiB/s (23.3MB/s)(111MiB/5002msec) 00:31:24.677 slat (nsec): min=6090, max=34649, avg=8530.76, stdev=2804.78 00:31:24.677 clat (usec): min=1183, max=5342, avg=2791.04, stdev=500.85 00:31:24.677 lat (usec): min=1190, max=5366, avg=2799.57, stdev=500.93 00:31:24.677 clat percentiles (usec): 00:31:24.677 | 1.00th=[ 1778], 5.00th=[ 2024], 10.00th=[ 2147], 20.00th=[ 2343], 00:31:24.677 | 30.00th=[ 2507], 40.00th=[ 2671], 50.00th=[ 2769], 60.00th=[ 2900], 00:31:24.677 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3425], 95.00th=[ 3621], 00:31:24.677 | 99.00th=[ 4113], 99.50th=[ 4359], 99.90th=[ 4883], 99.95th=[ 5080], 00:31:24.677 | 99.99th=[ 5276] 00:31:24.677 bw ( KiB/s): min=22256, max=23664, per=26.78%, avg=22817.78, stdev=424.86, samples=9 00:31:24.677 iops : min= 2782, max= 2958, avg=2852.22, stdev=53.11, samples=9 00:31:24.677 lat (msec) : 2=4.30%, 4=94.23%, 10=1.47% 00:31:24.677 cpu : usr=97.40%, sys=2.22%, ctx=7, majf=0, minf=11 00:31:24.677 IO depths : 1=0.1%, 2=1.4%, 4=66.3%, 8=32.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.677 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.677 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.677 issued rwts: total=14223,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.677 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:24.677 filename1: (groupid=0, jobs=1): err= 0: pid=3458939: Tue Jul 23 14:13:15 2024 00:31:24.677 read: IOPS=2092, BW=16.3MiB/s (17.1MB/s)(82.4MiB/5043msec) 00:31:24.677 slat (nsec): min=6126, max=39790, avg=9165.96, stdev=3475.88 00:31:24.677 clat (usec): min=1526, max=47125, avg=3785.93, stdev=3059.25 00:31:24.677 lat (usec): min=1532, max=47152, avg=3795.10, stdev=3059.27 00:31:24.677 clat percentiles (usec): 00:31:24.677 | 1.00th=[ 2212], 
5.00th=[ 2507], 10.00th=[ 2769], 20.00th=[ 2999], 00:31:24.677 | 30.00th=[ 3195], 40.00th=[ 3359], 50.00th=[ 3523], 60.00th=[ 3687], 00:31:24.678 | 70.00th=[ 3851], 80.00th=[ 4113], 90.00th=[ 4490], 95.00th=[ 4883], 00:31:24.678 | 99.00th=[ 5735], 99.50th=[42730], 99.90th=[46400], 99.95th=[46924], 00:31:24.678 | 99.99th=[46924] 00:31:24.678 bw ( KiB/s): min=14656, max=18496, per=19.81%, avg=16880.00, stdev=1360.12, samples=10 00:31:24.678 iops : min= 1832, max= 2312, avg=2110.00, stdev=170.01, samples=10 00:31:24.678 lat (msec) : 2=0.23%, 4=75.26%, 10=24.01%, 50=0.50% 00:31:24.678 cpu : usr=93.77%, sys=4.17%, ctx=398, majf=0, minf=9 00:31:24.678 IO depths : 1=0.3%, 2=2.2%, 4=66.9%, 8=30.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.678 complete : 0=0.0%, 4=94.6%, 8=5.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.678 issued rwts: total=10553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.678 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:24.678 filename1: (groupid=0, jobs=1): err= 0: pid=3458940: Tue Jul 23 14:13:15 2024 00:31:24.678 read: IOPS=2867, BW=22.4MiB/s (23.5MB/s)(112MiB/5003msec) 00:31:24.678 slat (nsec): min=6132, max=32150, avg=8678.66, stdev=3163.70 00:31:24.678 clat (usec): min=1225, max=44935, avg=2767.24, stdev=1760.24 00:31:24.678 lat (usec): min=1236, max=44945, avg=2775.92, stdev=1760.27 00:31:24.678 clat percentiles (usec): 00:31:24.678 | 1.00th=[ 1713], 5.00th=[ 1958], 10.00th=[ 2114], 20.00th=[ 2278], 00:31:24.678 | 30.00th=[ 2442], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2802], 00:31:24.678 | 70.00th=[ 2933], 80.00th=[ 3064], 90.00th=[ 3294], 95.00th=[ 3490], 00:31:24.678 | 99.00th=[ 3916], 99.50th=[ 4228], 99.90th=[44303], 99.95th=[44827], 00:31:24.678 | 99.99th=[44827] 00:31:24.678 bw ( KiB/s): min=20688, max=24320, per=26.92%, avg=22940.80, stdev=1135.46, samples=10 00:31:24.678 iops : min= 2586, max= 3040, avg=2867.60, stdev=141.93, samples=10 00:31:24.678 lat (msec) : 2=5.95%, 4=93.29%, 10=0.59%, 50=0.17% 00:31:24.678 cpu : usr=97.00%, sys=2.64%, ctx=16, majf=0, minf=0 00:31:24.678 IO depths : 1=0.2%, 2=1.3%, 4=66.8%, 8=31.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.678 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.678 complete : 0=0.0%, 4=95.3%, 8=4.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.678 issued rwts: total=14345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.678 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:24.678 00:31:24.678 Run status group 0 (all jobs): 00:31:24.678 READ: bw=83.2MiB/s (87.3MB/s), 16.3MiB/s-22.8MiB/s (17.1MB/s-23.9MB/s), io=420MiB (440MB), run=5002-5043msec 00:31:24.937 14:13:15 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:24.937 14:13:15 -- target/dif.sh@43 -- # local sub 00:31:24.937 14:13:15 -- target/dif.sh@45 -- # for sub in "$@" 00:31:24.937 14:13:15 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:24.937 14:13:15 -- target/dif.sh@36 -- # local sub_id=0 00:31:24.937 14:13:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:24.937 14:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 14:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.937 14:13:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:24.937 14:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.937 14:13:15 -- 
common/autotest_common.sh@10 -- # set +x 00:31:24.937 14:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.937 14:13:15 -- target/dif.sh@45 -- # for sub in "$@" 00:31:24.937 14:13:15 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:24.937 14:13:15 -- target/dif.sh@36 -- # local sub_id=1 00:31:24.937 14:13:15 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.937 14:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 14:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.937 14:13:15 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:24.937 14:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 14:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.937 00:31:24.937 real 0m24.286s 00:31:24.937 user 4m50.033s 00:31:24.937 sys 0m4.863s 00:31:24.937 14:13:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 ************************************ 00:31:24.937 END TEST fio_dif_rand_params 00:31:24.937 ************************************ 00:31:24.937 14:13:15 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:24.937 14:13:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:24.937 14:13:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 ************************************ 00:31:24.937 START TEST fio_dif_digest 00:31:24.937 ************************************ 00:31:24.937 14:13:15 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:24.937 14:13:15 -- target/dif.sh@123 -- # local NULL_DIF 00:31:24.937 14:13:15 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:24.937 14:13:15 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:24.937 14:13:15 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:24.937 14:13:15 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:24.937 14:13:15 -- target/dif.sh@127 -- # numjobs=3 00:31:24.937 14:13:15 -- target/dif.sh@127 -- # iodepth=3 00:31:24.937 14:13:15 -- target/dif.sh@127 -- # runtime=10 00:31:24.937 14:13:15 -- target/dif.sh@128 -- # hdgst=true 00:31:24.937 14:13:15 -- target/dif.sh@128 -- # ddgst=true 00:31:24.937 14:13:15 -- target/dif.sh@130 -- # create_subsystems 0 00:31:24.937 14:13:15 -- target/dif.sh@28 -- # local sub 00:31:24.937 14:13:15 -- target/dif.sh@30 -- # for sub in "$@" 00:31:24.937 14:13:15 -- target/dif.sh@31 -- # create_subsystem 0 00:31:24.937 14:13:15 -- target/dif.sh@18 -- # local sub_id=0 00:31:24.937 14:13:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:24.937 14:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 bdev_null0 00:31:24.937 14:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.937 14:13:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:24.937 14:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 14:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.937 14:13:15 -- target/dif.sh@23 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:24.937 14:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 14:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.937 14:13:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.937 14:13:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:24.937 14:13:15 -- common/autotest_common.sh@10 -- # set +x 00:31:24.937 [2024-07-23 14:13:15.853134] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.937 14:13:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:24.937 14:13:15 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:24.937 14:13:15 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:24.937 14:13:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:24.937 14:13:15 -- nvmf/common.sh@520 -- # config=() 00:31:24.937 14:13:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.937 14:13:15 -- nvmf/common.sh@520 -- # local subsystem config 00:31:24.937 14:13:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:24.937 14:13:15 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.937 14:13:15 -- target/dif.sh@82 -- # gen_fio_conf 00:31:24.937 14:13:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:24.937 { 00:31:24.937 "params": { 00:31:24.937 "name": "Nvme$subsystem", 00:31:24.937 "trtype": "$TEST_TRANSPORT", 00:31:24.937 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.937 "adrfam": "ipv4", 00:31:24.937 "trsvcid": "$NVMF_PORT", 00:31:24.937 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.937 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.937 "hdgst": ${hdgst:-false}, 00:31:24.937 "ddgst": ${ddgst:-false} 00:31:24.937 }, 00:31:24.937 "method": "bdev_nvme_attach_controller" 00:31:24.937 } 00:31:24.937 EOF 00:31:24.937 )") 00:31:24.937 14:13:15 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:24.937 14:13:15 -- target/dif.sh@54 -- # local file 00:31:24.937 14:13:15 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.937 14:13:15 -- target/dif.sh@56 -- # cat 00:31:24.937 14:13:15 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:24.937 14:13:15 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.937 14:13:15 -- common/autotest_common.sh@1320 -- # shift 00:31:24.937 14:13:15 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:24.937 14:13:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.937 14:13:15 -- nvmf/common.sh@542 -- # cat 00:31:24.937 14:13:15 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:24.937 14:13:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.937 14:13:15 -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.937 14:13:15 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:24.937 14:13:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:24.937 14:13:15 -- nvmf/common.sh@544 -- # jq . 
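The digest subsystem just brought up uses the same target-side recipe as every subsystem in this run, here with a dif-type 3 null bdev. Condensed from the traced rpc_cmd calls (rpc_cmd is the test wrapper around scripts/rpc.py; arguments are verbatim from the trace):

    # Create the namespace backing store and export it over NVMe/TCP:
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # ...fio exercises the namespace (below), then teardown mirrors creation:
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0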
00:31:24.937 14:13:15 -- nvmf/common.sh@545 -- # IFS=, 00:31:24.937 14:13:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:24.937 "params": { 00:31:24.937 "name": "Nvme0", 00:31:24.937 "trtype": "tcp", 00:31:24.937 "traddr": "10.0.0.2", 00:31:24.937 "adrfam": "ipv4", 00:31:24.937 "trsvcid": "4420", 00:31:24.937 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.937 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.937 "hdgst": true, 00:31:24.937 "ddgst": true 00:31:24.937 }, 00:31:24.937 "method": "bdev_nvme_attach_controller" 00:31:24.937 }' 00:31:24.937 14:13:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:24.937 14:13:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:24.937 14:13:15 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.937 14:13:15 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.937 14:13:15 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:24.937 14:13:15 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:24.937 14:13:15 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:24.937 14:13:15 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:24.937 14:13:15 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:24.937 14:13:15 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:25.195 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:25.195 ... 00:31:25.195 fio-3.35 00:31:25.195 Starting 3 threads 00:31:25.454 EAL: No free 2048 kB hugepages reported on node 1 00:31:25.712 [2024-07-23 14:13:16.540662] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
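The only initiator-side change for fio_dif_digest is visible in the JSON printed above: hdgst and ddgst are now true, so the NVMe/TCP initiator negotiates header and data digests for this run (the earlier random-params runs attached with both set to false). The attach entry, reflowed for readability with content verbatim from the trace:

    {
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": true,
        "ddgst": true
      },
      "method": "bdev_nvme_attach_controller"
    }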
00:31:25.713 [2024-07-23 14:13:16.540717] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:35.754 00:31:35.754 filename0: (groupid=0, jobs=1): err= 0: pid=3460078: Tue Jul 23 14:13:26 2024 00:31:35.754 read: IOPS=292, BW=36.5MiB/s (38.3MB/s)(367MiB/10047msec) 00:31:35.754 slat (nsec): min=6446, max=25427, avg=11043.58, stdev=1907.37 00:31:35.754 clat (usec): min=5444, max=56910, avg=10238.37, stdev=4482.59 00:31:35.754 lat (usec): min=5453, max=56922, avg=10249.41, stdev=4482.70 00:31:35.754 clat percentiles (usec): 00:31:35.754 | 1.00th=[ 6718], 5.00th=[ 7242], 10.00th=[ 7570], 20.00th=[ 8094], 00:31:35.754 | 30.00th=[ 8717], 40.00th=[ 9372], 50.00th=[10028], 60.00th=[10421], 00:31:35.754 | 70.00th=[10945], 80.00th=[11338], 90.00th=[11994], 95.00th=[12518], 00:31:35.754 | 99.00th=[16909], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:31:35.754 | 99.99th=[56886] 00:31:35.754 bw ( KiB/s): min=29696, max=44633, per=37.13%, avg=37563.20, stdev=3624.71, samples=20 00:31:35.754 iops : min= 232, max= 348, avg=293.40, stdev=28.26, samples=20 00:31:35.754 lat (msec) : 10=50.10%, 20=48.91%, 50=0.14%, 100=0.85% 00:31:35.754 cpu : usr=96.17%, sys=3.48%, ctx=19, majf=0, minf=137 00:31:35.754 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.754 issued rwts: total=2936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.754 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:35.754 filename0: (groupid=0, jobs=1): err= 0: pid=3460079: Tue Jul 23 14:13:26 2024 00:31:35.754 read: IOPS=188, BW=23.5MiB/s (24.7MB/s)(236MiB/10009msec) 00:31:35.754 slat (nsec): min=6471, max=29342, avg=11125.87, stdev=2254.65 00:31:35.754 clat (msec): min=6, max=100, avg=15.91, stdev=13.43 00:31:35.754 lat (msec): min=6, max=100, avg=15.92, stdev=13.43 00:31:35.754 clat percentiles (msec): 00:31:35.754 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 11], 00:31:35.754 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:31:35.754 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 52], 95.00th=[ 55], 00:31:35.754 | 99.00th=[ 58], 99.50th=[ 59], 99.90th=[ 60], 99.95th=[ 102], 00:31:35.754 | 99.99th=[ 102] 00:31:35.754 bw ( KiB/s): min=15329, max=31744, per=24.20%, avg=24480.05, stdev=4833.61, samples=19 00:31:35.754 iops : min= 119, max= 248, avg=191.21, stdev=37.84, samples=19 00:31:35.754 lat (msec) : 10=14.32%, 20=75.33%, 50=0.16%, 100=10.13%, 250=0.05% 00:31:35.754 cpu : usr=96.91%, sys=2.75%, ctx=13, majf=0, minf=84 00:31:35.754 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.754 issued rwts: total=1885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.754 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:35.754 filename0: (groupid=0, jobs=1): err= 0: pid=3460081: Tue Jul 23 14:13:26 2024 00:31:35.754 read: IOPS=311, BW=39.0MiB/s (40.9MB/s)(390MiB/10006msec) 00:31:35.754 slat (nsec): min=6409, max=67972, avg=11085.99, stdev=2432.41 00:31:35.754 clat (usec): min=5479, max=54787, avg=9605.78, stdev=2569.72 00:31:35.754 lat (usec): min=5488, max=54799, avg=9616.86, stdev=2569.87 00:31:35.754 clat percentiles (usec): 00:31:35.754 | 1.00th=[ 6259], 
5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 7898], 00:31:35.755 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9634], 60.00th=[10159], 00:31:35.755 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[11994], 00:31:35.755 | 99.00th=[13173], 99.50th=[14353], 99.90th=[53740], 99.95th=[54789], 00:31:35.755 | 99.99th=[54789] 00:31:35.755 bw ( KiB/s): min=28672, max=44544, per=39.39%, avg=39846.32, stdev=3687.38, samples=19 00:31:35.755 iops : min= 224, max= 348, avg=311.26, stdev=28.76, samples=19 00:31:35.755 lat (msec) : 10=57.63%, 20=42.08%, 50=0.10%, 100=0.19% 00:31:35.755 cpu : usr=95.96%, sys=3.65%, ctx=17, majf=0, minf=244 00:31:35.755 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.755 issued rwts: total=3120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.755 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:35.755 00:31:35.755 Run status group 0 (all jobs): 00:31:35.755 READ: bw=98.8MiB/s (104MB/s), 23.5MiB/s-39.0MiB/s (24.7MB/s-40.9MB/s), io=993MiB (1041MB), run=10006-10047msec 00:31:36.014 14:13:26 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:36.014 14:13:26 -- target/dif.sh@43 -- # local sub 00:31:36.014 14:13:26 -- target/dif.sh@45 -- # for sub in "$@" 00:31:36.014 14:13:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:36.014 14:13:26 -- target/dif.sh@36 -- # local sub_id=0 00:31:36.014 14:13:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:36.014 14:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.014 14:13:26 -- common/autotest_common.sh@10 -- # set +x 00:31:36.014 14:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.015 14:13:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:36.015 14:13:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:36.015 14:13:26 -- common/autotest_common.sh@10 -- # set +x 00:31:36.015 14:13:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:36.015 00:31:36.015 real 0m11.122s 00:31:36.015 user 0m35.981s 00:31:36.015 sys 0m1.260s 00:31:36.015 14:13:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:36.015 14:13:26 -- common/autotest_common.sh@10 -- # set +x 00:31:36.015 ************************************ 00:31:36.015 END TEST fio_dif_digest 00:31:36.015 ************************************ 00:31:36.015 14:13:26 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:36.015 14:13:26 -- target/dif.sh@147 -- # nvmftestfini 00:31:36.015 14:13:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:36.015 14:13:26 -- nvmf/common.sh@116 -- # sync 00:31:36.015 14:13:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:36.015 14:13:26 -- nvmf/common.sh@119 -- # set +e 00:31:36.015 14:13:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:36.015 14:13:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:36.015 rmmod nvme_tcp 00:31:36.015 rmmod nvme_fabrics 00:31:36.015 rmmod nvme_keyring 00:31:36.275 14:13:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:36.275 14:13:27 -- nvmf/common.sh@123 -- # set -e 00:31:36.275 14:13:27 -- nvmf/common.sh@124 -- # return 0 00:31:36.275 14:13:27 -- nvmf/common.sh@477 -- # '[' -n 3451454 ']' 00:31:36.275 14:13:27 -- nvmf/common.sh@478 -- # killprocess 3451454 00:31:36.275 14:13:27 -- common/autotest_common.sh@926 -- # '[' -z 3451454 ']' 
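# Note: the nvmftestfini/killprocess teardown traced around here, condensed.
# $pid stands in for the nvmf_tgt application pid (3451454 in this run).
sync                                  # flush I/O before pulling kernel modules
modprobe -v -r nvme-tcp               # also drops nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
if kill -0 "$pid" 2> /dev/null; then  # target app still alive?
  kill "$pid"                         # SIGTERM; reactors shut down cleanly
  wait "$pid"                         # reap it before the next test stage
fi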
00:31:36.275 14:13:27 -- common/autotest_common.sh@930 -- # kill -0 3451454 00:31:36.275 14:13:27 -- common/autotest_common.sh@931 -- # uname 00:31:36.275 14:13:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:36.275 14:13:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3451454 00:31:36.275 14:13:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:36.275 14:13:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:36.275 14:13:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3451454' 00:31:36.275 killing process with pid 3451454 00:31:36.275 14:13:27 -- common/autotest_common.sh@945 -- # kill 3451454 00:31:36.275 14:13:27 -- common/autotest_common.sh@950 -- # wait 3451454 00:31:36.275 14:13:27 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:36.275 14:13:27 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:39.567 Waiting for block devices as requested 00:31:39.567 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:31:39.567 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:39.567 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:39.567 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:39.567 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:39.567 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:39.567 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:39.567 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:39.567 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:39.567 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:31:39.826 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:31:39.826 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:31:39.826 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:31:40.086 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:31:40.086 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:31:40.086 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:31:40.086 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:31:40.345 14:13:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:40.345 14:13:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:40.345 14:13:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:40.345 14:13:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:40.345 14:13:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.345 14:13:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:40.345 14:13:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.250 14:13:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:31:42.250 00:31:42.250 real 1m13.235s 00:31:42.250 user 7m8.089s 00:31:42.250 sys 0m18.454s 00:31:42.250 14:13:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:42.250 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:31:42.250 ************************************ 00:31:42.250 END TEST nvmf_dif 00:31:42.250 ************************************ 00:31:42.250 14:13:33 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:42.250 14:13:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:42.250 14:13:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:42.250 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:31:42.250 ************************************ 00:31:42.250 START TEST nvmf_abort_qd_sizes 00:31:42.250 ************************************ 00:31:42.250 
14:13:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:42.509 * Looking for test storage... 00:31:42.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:42.509 14:13:33 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.509 14:13:33 -- nvmf/common.sh@7 -- # uname -s 00:31:42.509 14:13:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.509 14:13:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.509 14:13:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.509 14:13:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.509 14:13:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.509 14:13:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.509 14:13:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.509 14:13:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.509 14:13:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.509 14:13:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.509 14:13:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:42.509 14:13:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:42.509 14:13:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.509 14:13:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.509 14:13:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.509 14:13:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.509 14:13:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.509 14:13:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.509 14:13:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.509 14:13:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.509 14:13:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.509 14:13:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.509 14:13:33 -- paths/export.sh@5 -- # export PATH 00:31:42.509 14:13:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.509 14:13:33 -- nvmf/common.sh@46 -- # : 0 00:31:42.509 14:13:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:31:42.509 14:13:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:31:42.509 14:13:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:31:42.509 14:13:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.509 14:13:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.509 14:13:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:31:42.509 14:13:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:31:42.509 14:13:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:31:42.509 14:13:33 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:31:42.509 14:13:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:31:42.509 14:13:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.509 14:13:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:31:42.509 14:13:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:31:42.509 14:13:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:31:42.509 14:13:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.509 14:13:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:42.509 14:13:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.509 14:13:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:31:42.509 14:13:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:31:42.509 14:13:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:31:42.509 14:13:33 -- common/autotest_common.sh@10 -- # set +x 00:31:47.787 14:13:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:31:47.787 14:13:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:31:47.787 14:13:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:31:47.787 14:13:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:31:47.787 14:13:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:31:47.787 14:13:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:31:47.787 14:13:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:31:47.787 14:13:38 -- nvmf/common.sh@294 -- # net_devs=() 00:31:47.787 14:13:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:31:47.787 14:13:38 -- nvmf/common.sh@295 -- # e810=() 00:31:47.787 14:13:38 -- nvmf/common.sh@295 -- # local -ga e810 00:31:47.787 14:13:38 -- nvmf/common.sh@296 -- # x722=() 00:31:47.787 14:13:38 -- nvmf/common.sh@296 -- # local -ga x722 00:31:47.787 14:13:38 -- nvmf/common.sh@297 -- # mlx=() 00:31:47.787 14:13:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:31:47.787 14:13:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:47.787 14:13:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:31:47.787 14:13:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:31:47.787 14:13:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:31:47.787 14:13:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:47.787 14:13:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:47.787 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:47.787 14:13:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:31:47.787 14:13:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:47.787 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:47.787 14:13:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:31:47.787 14:13:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:47.787 14:13:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.787 14:13:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:47.787 14:13:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.787 14:13:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:47.787 Found net devices under 0000:86:00.0: cvl_0_0 00:31:47.787 14:13:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.787 14:13:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:31:47.787 14:13:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:47.787 14:13:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:31:47.787 14:13:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:47.787 14:13:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:47.787 Found net devices under 0000:86:00.1: cvl_0_1 00:31:47.787 14:13:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:31:47.787 14:13:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:31:47.787 14:13:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:31:47.787 14:13:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:31:47.787 14:13:38 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:31:47.787 14:13:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:31:47.787 14:13:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:47.787 14:13:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:47.787 14:13:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:47.787 14:13:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:31:47.787 14:13:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:47.787 14:13:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:47.787 14:13:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:31:47.787 14:13:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:47.787 14:13:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:47.787 14:13:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:31:47.787 14:13:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:31:47.787 14:13:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:31:47.787 14:13:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:47.787 14:13:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:47.787 14:13:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:47.787 14:13:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:31:47.787 14:13:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:47.787 14:13:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:47.788 14:13:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:47.788 14:13:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:31:47.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:47.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:31:47.788 00:31:47.788 --- 10.0.0.2 ping statistics --- 00:31:47.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.788 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:31:47.788 14:13:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:47.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:47.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:31:47.788 00:31:47.788 --- 10.0.0.1 ping statistics --- 00:31:47.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:47.788 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:31:47.788 14:13:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:47.788 14:13:38 -- nvmf/common.sh@410 -- # return 0 00:31:47.788 14:13:38 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:31:47.788 14:13:38 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:50.345 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:31:50.345 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:31:51.281 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:31:51.540 14:13:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:51.540 14:13:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:31:51.540 14:13:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:31:51.540 14:13:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:51.540 14:13:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:31:51.540 14:13:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:31:51.540 14:13:42 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:31:51.540 14:13:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:31:51.540 14:13:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:31:51.540 14:13:42 -- common/autotest_common.sh@10 -- # set +x 00:31:51.540 14:13:42 -- nvmf/common.sh@469 -- # nvmfpid=3468065 00:31:51.540 14:13:42 -- nvmf/common.sh@470 -- # waitforlisten 3468065 00:31:51.540 14:13:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:31:51.540 14:13:42 -- common/autotest_common.sh@819 -- # '[' -z 3468065 ']' 00:31:51.540 14:13:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:51.540 14:13:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:31:51.540 14:13:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:51.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:51.540 14:13:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:31:51.540 14:13:42 -- common/autotest_common.sh@10 -- # set +x 00:31:51.540 [2024-07-23 14:13:42.424159] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
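# Note: nvmf_tcp_init, as traced above, builds a two-endpoint topology from
# one e810 card: the target port moves into a network namespace while the
# second port stays in the root namespace as the initiator.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT # open the NVMe/TCP port
ping -c 1 10.0.0.2                                           # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # and back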
00:31:51.540 [2024-07-23 14:13:42.424201] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:51.540 EAL: No free 2048 kB hugepages reported on node 1 00:31:51.540 [2024-07-23 14:13:42.489851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:51.800 [2024-07-23 14:13:42.591704] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:31:51.800 [2024-07-23 14:13:42.591825] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:51.800 [2024-07-23 14:13:42.591837] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:51.800 [2024-07-23 14:13:42.591844] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:51.800 [2024-07-23 14:13:42.591889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:51.800 [2024-07-23 14:13:42.591990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:51.800 [2024-07-23 14:13:42.592072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:51.800 [2024-07-23 14:13:42.592076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:52.368 14:13:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:31:52.368 14:13:43 -- common/autotest_common.sh@852 -- # return 0 00:31:52.368 14:13:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:31:52.368 14:13:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:31:52.368 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:31:52.368 14:13:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:31:52.368 14:13:43 -- scripts/common.sh@311 -- # local bdf bdfs 00:31:52.368 14:13:43 -- scripts/common.sh@312 -- # local nvmes 00:31:52.368 14:13:43 -- scripts/common.sh@314 -- # [[ -n 0000:5e:00.0 ]] 00:31:52.368 14:13:43 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:31:52.368 14:13:43 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:31:52.368 14:13:43 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:31:52.368 14:13:43 -- scripts/common.sh@322 -- # uname -s 00:31:52.368 14:13:43 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:31:52.368 14:13:43 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:31:52.368 14:13:43 -- scripts/common.sh@327 -- # (( 1 )) 00:31:52.368 14:13:43 -- scripts/common.sh@328 -- # printf '%s\n' 0000:5e:00.0 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:5e:00.0 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:31:52.368 14:13:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:52.368 14:13:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:52.368 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:31:52.368 ************************************ 00:31:52.368 START TEST 
spdk_target_abort 00:31:52.368 ************************************ 00:31:52.368 14:13:43 -- common/autotest_common.sh@1104 -- # spdk_target 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:31:52.368 14:13:43 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:31:52.368 14:13:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:52.368 14:13:43 -- common/autotest_common.sh@10 -- # set +x 00:31:55.685 spdk_targetn1 00:31:55.685 14:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.685 14:13:46 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:55.685 14:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.685 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:55.685 [2024-07-23 14:13:46.120182] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:55.685 14:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.685 14:13:46 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:31:55.685 14:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.686 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:55.686 14:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:31:55.686 14:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.686 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:55.686 14:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:31:55.686 14:13:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:55.686 14:13:46 -- common/autotest_common.sh@10 -- # set +x 00:31:55.686 [2024-07-23 14:13:46.153142] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:55.686 14:13:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@24 -- # local target r 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:55.686 14:13:46 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:55.686 EAL: No free 2048 kB hugepages reported on node 1 00:31:58.979 Initializing NVMe Controllers 00:31:58.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:31:58.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:31:58.979 Initialization complete. Launching workers. 00:31:58.979 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 5796, failed: 0 00:31:58.979 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1502, failed to submit 4294 00:31:58.979 success 909, unsuccess 593, failed 0 00:31:58.979 14:13:49 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:58.979 14:13:49 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:31:58.979 EAL: No free 2048 kB hugepages reported on node 1 00:32:02.272 [2024-07-23 14:13:52.615107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.272 [2024-07-23 14:13:52.615147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.272 [2024-07-23 14:13:52.615155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.272 [2024-07-23 14:13:52.615166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.272 [2024-07-23 14:13:52.615172] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.272 [2024-07-23 14:13:52.615178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.272 [2024-07-23 14:13:52.615183] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.272 [2024-07-23 14:13:52.615190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.272 [2024-07-23 14:13:52.615196] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.273 [2024-07-23 14:13:52.615202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.273 [2024-07-23 14:13:52.615208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.273 [2024-07-23 14:13:52.615214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.273 [2024-07-23 14:13:52.615221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.273 [2024-07-23 14:13:52.615227] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.273 [2024-07-23 14:13:52.615234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.273 [2024-07-23 14:13:52.615240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b248a0 is same with the state(5) to be set 00:32:02.273 Initializing NVMe Controllers 00:32:02.273 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:02.273 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:02.273 Initialization complete. Launching workers. 00:32:02.273 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8669, failed: 0 00:32:02.273 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1227, failed to submit 7442 00:32:02.273 success 375, unsuccess 852, failed 0 00:32:02.273 14:13:52 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:02.273 14:13:52 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:02.273 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.595 Initializing NVMe Controllers 00:32:05.596 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:05.596 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:05.596 Initialization complete. Launching workers. 
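# Note: the spdk_target_abort setup and sweep running here, condensed to
# plain rpc.py calls (rpc_cmd in the trace is a thin wrapper around rpc.py).
rpc=./spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420
target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
for qd in 4 24 64; do                 # the three queue depths under test
  ./spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
done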
00:32:05.596 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 34566, failed: 0 00:32:05.596 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2771, failed to submit 31795 00:32:05.596 success 742, unsuccess 2029, failed 0 00:32:05.596 14:13:55 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:05.596 14:13:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.596 14:13:55 -- common/autotest_common.sh@10 -- # set +x 00:32:05.596 14:13:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:05.596 14:13:56 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:05.596 14:13:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:05.596 14:13:56 -- common/autotest_common.sh@10 -- # set +x 00:32:06.535 14:13:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:06.535 14:13:57 -- target/abort_qd_sizes.sh@62 -- # killprocess 3468065 00:32:06.535 14:13:57 -- common/autotest_common.sh@926 -- # '[' -z 3468065 ']' 00:32:06.535 14:13:57 -- common/autotest_common.sh@930 -- # kill -0 3468065 00:32:06.535 14:13:57 -- common/autotest_common.sh@931 -- # uname 00:32:06.535 14:13:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:06.535 14:13:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 3468065 00:32:06.535 14:13:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:06.535 14:13:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:06.535 14:13:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 3468065' 00:32:06.535 killing process with pid 3468065 00:32:06.535 14:13:57 -- common/autotest_common.sh@945 -- # kill 3468065 00:32:06.535 14:13:57 -- common/autotest_common.sh@950 -- # wait 3468065 00:32:06.795 00:32:06.795 real 0m14.270s 00:32:06.795 user 0m56.701s 00:32:06.795 sys 0m2.166s 00:32:06.795 14:13:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:06.795 14:13:57 -- common/autotest_common.sh@10 -- # set +x 00:32:06.795 ************************************ 00:32:06.795 END TEST spdk_target_abort 00:32:06.795 ************************************ 00:32:06.795 14:13:57 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:06.795 14:13:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:06.795 14:13:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:06.795 14:13:57 -- common/autotest_common.sh@10 -- # set +x 00:32:06.795 ************************************ 00:32:06.795 START TEST kernel_target_abort 00:32:06.795 ************************************ 00:32:06.795 14:13:57 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:06.795 14:13:57 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:06.795 14:13:57 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:06.795 14:13:57 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:06.795 14:13:57 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:06.795 14:13:57 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:06.795 14:13:57 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:06.795 14:13:57 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:06.795 14:13:57 -- nvmf/common.sh@627 -- # local block nvme 00:32:06.795 14:13:57 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:06.795 14:13:57 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:06.795 14:13:57 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:06.795 14:13:57 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:09.334 Waiting for block devices as requested 00:32:09.334 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:32:09.334 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:09.334 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:09.594 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:09.594 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:09.594 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:09.594 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:09.853 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:09.853 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:09.853 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:32:10.113 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:32:10.113 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:32:10.113 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:32:10.113 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:32:10.372 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:32:10.372 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:32:10.372 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:32:10.372 14:14:01 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:10.372 14:14:01 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:10.372 14:14:01 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:10.372 14:14:01 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:10.372 14:14:01 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:10.632 No valid GPT data, bailing 00:32:10.632 14:14:01 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:10.632 14:14:01 -- scripts/common.sh@393 -- # pt= 00:32:10.632 14:14:01 -- scripts/common.sh@394 -- # return 1 00:32:10.632 14:14:01 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:10.632 14:14:01 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:10.632 14:14:01 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:10.632 14:14:01 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:10.632 14:14:01 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:10.632 14:14:01 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:10.632 14:14:01 -- nvmf/common.sh@654 -- # echo 1 00:32:10.632 14:14:01 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:10.632 14:14:01 -- nvmf/common.sh@656 -- # echo 1 00:32:10.632 14:14:01 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:10.632 14:14:01 -- nvmf/common.sh@663 -- # echo tcp 00:32:10.632 14:14:01 -- nvmf/common.sh@664 -- # echo 4420 00:32:10.632 14:14:01 -- nvmf/common.sh@665 -- # echo ipv4 00:32:10.632 14:14:01 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:10.632 14:14:01 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:32:10.632 00:32:10.632 Discovery Log Number of Records 2, Generation counter 2 00:32:10.632 =====Discovery Log Entry 0====== 00:32:10.632 trtype: tcp 00:32:10.632 adrfam: ipv4 00:32:10.632 
subtype: current discovery subsystem 00:32:10.632 treq: not specified, sq flow control disable supported 00:32:10.632 portid: 1 00:32:10.632 trsvcid: 4420 00:32:10.632 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:10.632 traddr: 10.0.0.1 00:32:10.632 eflags: none 00:32:10.632 sectype: none 00:32:10.632 =====Discovery Log Entry 1====== 00:32:10.632 trtype: tcp 00:32:10.632 adrfam: ipv4 00:32:10.632 subtype: nvme subsystem 00:32:10.632 treq: not specified, sq flow control disable supported 00:32:10.632 portid: 1 00:32:10.632 trsvcid: 4420 00:32:10.632 subnqn: kernel_target 00:32:10.632 traddr: 10.0.0.1 00:32:10.632 eflags: none 00:32:10.632 sectype: none 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:10.632 14:14:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:10.632 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.927 Initializing NVMe Controllers 00:32:13.927 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:13.927 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:13.927 Initialization complete. Launching workers. 
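# Note: the in-kernel target these abort runs attack was assembled through
# nvmet configfs (configure_kernel_target, traced above). The echo redirect
# targets are not visible in the xtrace; the standard nvmet attribute names
# below are assumed from the configfs layout.
sub=/sys/kernel/config/nvmet/subsystems/kernel_target
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo SPDK-kernel_target > "$sub/attr_serial"         # assumed attribute name
echo 1 > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"  # backing device from the GPT check
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                     # expose the subsystem on the port
# Verified above with: nvme discover -t tcp -a 10.0.0.1 -s 4420 --hostnqn=... --hostid=...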
00:32:13.927 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 38156, failed: 0 00:32:13.927 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 38156, failed to submit 0 00:32:13.927 success 0, unsuccess 38156, failed 0 00:32:13.927 14:14:04 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:13.927 14:14:04 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:13.927 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.219 Initializing NVMe Controllers 00:32:17.219 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:17.219 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:17.219 Initialization complete. Launching workers. 00:32:17.219 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 76926, failed: 0 00:32:17.219 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 19422, failed to submit 57504 00:32:17.219 success 0, unsuccess 19422, failed 0 00:32:17.219 14:14:07 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.219 14:14:07 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:17.219 EAL: No free 2048 kB hugepages reported on node 1 00:32:19.820 Initializing NVMe Controllers 00:32:19.820 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:19.820 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:19.820 Initialization complete. Launching workers. 
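# Note: a consistency check on the abort counters each run prints:
# submitted + failed-to-submit = I/O completed, and success + unsuccess =
# submitted. For the qd=24 kernel run above: 19422 + 57504 = 76926 I/Os, and
# 0 + 19422 = 19422 aborts. Against the kernel target every abort here comes
# back "unsuccess" (the abort completed without catching its I/O), whereas a
# share succeeded against the SPDK target; "failed 0" is the healthy signal.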
00:32:19.820 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 74793, failed: 0 00:32:19.820 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18670, failed to submit 56123 00:32:19.820 success 0, unsuccess 18670, failed 0 00:32:19.820 14:14:10 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:19.820 14:14:10 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:19.820 14:14:10 -- nvmf/common.sh@677 -- # echo 0 00:32:19.820 14:14:10 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:19.820 14:14:10 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:19.820 14:14:10 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:19.820 14:14:10 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:19.820 14:14:10 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:19.820 14:14:10 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:19.820 00:32:19.820 real 0m13.129s 00:32:19.820 user 0m4.017s 00:32:19.820 sys 0m3.747s 00:32:19.820 14:14:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:19.820 14:14:10 -- common/autotest_common.sh@10 -- # set +x 00:32:19.820 ************************************ 00:32:19.820 END TEST kernel_target_abort 00:32:19.820 ************************************ 00:32:19.820 14:14:10 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:19.820 14:14:10 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:19.820 14:14:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:19.820 14:14:10 -- nvmf/common.sh@116 -- # sync 00:32:19.820 14:14:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:19.820 14:14:10 -- nvmf/common.sh@119 -- # set +e 00:32:19.820 14:14:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:19.820 14:14:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:19.820 rmmod nvme_tcp 00:32:19.820 rmmod nvme_fabrics 00:32:19.820 rmmod nvme_keyring 00:32:19.820 14:14:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:19.820 14:14:10 -- nvmf/common.sh@123 -- # set -e 00:32:19.820 14:14:10 -- nvmf/common.sh@124 -- # return 0 00:32:19.820 14:14:10 -- nvmf/common.sh@477 -- # '[' -n 3468065 ']' 00:32:19.820 14:14:10 -- nvmf/common.sh@478 -- # killprocess 3468065 00:32:19.820 14:14:10 -- common/autotest_common.sh@926 -- # '[' -z 3468065 ']' 00:32:19.820 14:14:10 -- common/autotest_common.sh@930 -- # kill -0 3468065 00:32:19.820 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (3468065) - No such process 00:32:19.820 14:14:10 -- common/autotest_common.sh@953 -- # echo 'Process with pid 3468065 is not found' 00:32:19.820 Process with pid 3468065 is not found 00:32:19.820 14:14:10 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:19.820 14:14:10 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:23.113 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:23.113 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:00:04.2 (8086 2021): Already using the ioatdma 
driver 00:32:23.113 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:23.113 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:23.113 14:14:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:23.113 14:14:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:23.113 14:14:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:23.113 14:14:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:23.113 14:14:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.113 14:14:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:23.113 14:14:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:25.019 14:14:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:25.019 00:32:25.019 real 0m42.601s 00:32:25.019 user 1m4.755s 00:32:25.019 sys 0m13.832s 00:32:25.019 14:14:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:25.019 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:32:25.019 ************************************ 00:32:25.019 END TEST nvmf_abort_qd_sizes 00:32:25.019 ************************************ 00:32:25.019 14:14:15 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:25.019 14:14:15 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:25.019 14:14:15 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:25.019 14:14:15 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:25.019 14:14:15 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:25.019 14:14:15 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:25.019 14:14:15 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:25.019 14:14:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:25.019 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:32:25.019 14:14:15 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:25.019 14:14:15 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:25.019 14:14:15 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:25.019 14:14:15 -- common/autotest_common.sh@10 -- # set +x 00:32:29.212 INFO: APP EXITING 00:32:29.212 INFO: killing all VMs 00:32:29.212 INFO: killing vhost app 00:32:29.212 INFO: EXIT DONE 00:32:31.744 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:32:31.744 0000:00:04.7 (8086 
2021): Already using the ioatdma driver 00:32:31.744 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:32:31.744 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:32:31.744 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:32:31.744 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:32:31.744 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:32:31.744 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:32:31.744 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:32:32.002 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:32:32.002 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:32:32.002 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:32:32.002 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:32:32.002 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:32:32.002 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:32:32.002 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:32:32.002 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:32:34.537 Cleaning 00:32:34.537 Removing: /var/run/dpdk/spdk0/config 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:34.537 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:34.537 Removing: /var/run/dpdk/spdk1/config 00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:34.796 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:34.796 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:34.796 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:34.796 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:34.796 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:34.796 Removing: /var/run/dpdk/spdk2/config 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:34.796 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:34.796 Removing: /var/run/dpdk/spdk3/config 00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:34.797 Removing: 
00:32:34.537 Cleaning
00:32:34.537 Removing: /var/run/dpdk/spdk0/config
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:32:34.537 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:32:34.537 Removing: /var/run/dpdk/spdk0/hugepage_info
00:32:34.537 Removing: /var/run/dpdk/spdk1/config
00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:32:34.537 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:32:34.796 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:32:34.796 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:32:34.796 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:32:34.796 Removing: /var/run/dpdk/spdk1/hugepage_info
00:32:34.796 Removing: /var/run/dpdk/spdk1/mp_socket
00:32:34.796 Removing: /var/run/dpdk/spdk2/config
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:32:34.796 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:32:34.796 Removing: /var/run/dpdk/spdk2/hugepage_info
00:32:34.797 Removing: /var/run/dpdk/spdk3/config
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:32:34.797 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:32:34.797 Removing: /var/run/dpdk/spdk3/hugepage_info
00:32:34.797 Removing: /var/run/dpdk/spdk4/config
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:32:34.797 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:32:34.797 Removing: /var/run/dpdk/spdk4/hugepage_info
00:32:34.797 Removing: /dev/shm/bdev_svc_trace.1
00:32:34.797 Removing: /dev/shm/nvmf_trace.0
00:32:34.797 Removing: /dev/shm/spdk_tgt_trace.pid3076186
00:32:34.797 Removing: /var/run/dpdk/spdk0
00:32:34.797 Removing: /var/run/dpdk/spdk1
00:32:34.797 Removing: /var/run/dpdk/spdk2
00:32:34.797 Removing: /var/run/dpdk/spdk3
00:32:34.797 Removing: /var/run/dpdk/spdk4
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3074017
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3075116
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3076186
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3076855
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3078392
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3079674
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3079965
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3080248
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3080555
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3080839
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3081092
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3081352
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3081628
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3082605
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3085499
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3085858
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3086165
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3086186
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3086677
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3086907
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3087213
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3087420
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3087680
00:32:34.797 Removing: /var/run/dpdk/spdk_pid3087915
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3088055
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3088187
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3088746
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3088997
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3089289
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3089555
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3089587
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3089641
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3089873
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3090131
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3090363
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3090616
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3090854
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3091107
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3091339
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3091594
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3091830
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3092083
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3092318
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3092565
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3092812
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3093059
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3093294
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3093547
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3093784
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3094031
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3094269
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3094518
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3094804
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3095133
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3095369
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3095618
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3095861
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3096111
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3096585
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3096989
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3097223
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3097471
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3097711
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3097958
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3098198
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3098458
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3098695
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3098947
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3099189
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3099436
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3099676
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3099929
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3100181
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3100516
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3104158
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3185726
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3189992
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3200106
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3205325
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3209608
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3210301
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3218887
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3219299
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3223438
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3229861
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3232493
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3242768
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3251720
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3253584
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3254516
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3271258
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3275063
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3280116
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3281748
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3283621
00:32:35.056 Removing: /var/run/dpdk/spdk_pid3283870
00:32:35.315 Removing: /var/run/dpdk/spdk_pid3284108
00:32:35.315 Removing: /var/run/dpdk/spdk_pid3284347
00:32:35.315 Removing: /var/run/dpdk/spdk_pid3284876
00:32:35.315 Removing: /var/run/dpdk/spdk_pid3286825
00:32:35.315 Removing: /var/run/dpdk/spdk_pid3287751
00:32:35.315 Removing: /var/run/dpdk/spdk_pid3288308
00:32:35.315 Removing: /var/run/dpdk/spdk_pid3293957
00:32:35.315 Removing: /var/run/dpdk/spdk_pid3299545
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3304486
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3341071
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3344961
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3350996
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3352323
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3353892
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3358742
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3362802
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3370193
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3370256
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3374932
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3375096
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3375257
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3375649
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3375654
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3377076
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3378839
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3380554
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3382180
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3383808
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3385436
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3391345
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3391921
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3393699
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3394720
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3400635
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3403830
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3409088
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3414819
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3420617
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3421327
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3421993
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3422593
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3423508
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3424220
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3424883
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3425425
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3429718
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3429954
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3435848
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3436114
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3438360
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3446484
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3446489
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3451553
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3453555
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3455546
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3456829
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3458643
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3459908
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3468746
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3469216
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3469816
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3472129
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3472671
00:32:35.316 Removing: /var/run/dpdk/spdk_pid3473147
00:32:35.316 Clean
00:32:35.575 killing process with pid 3028933
00:32:43.694 killing process with pid 3028930
00:32:43.694 killing process with pid 3028932
00:32:43.694 killing process with pid 3028931
00:32:43.694 14:14:33 -- common/autotest_common.sh@1436 -- # return 0
00:32:43.694 14:14:33 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup
00:32:43.694 14:14:33 -- common/autotest_common.sh@718 -- # xtrace_disable
00:32:43.694 14:14:33 -- common/autotest_common.sh@10 -- # set +x
00:32:43.694 14:14:33 -- spdk/autotest.sh@389 -- # timing_exit autotest
00:32:43.694 14:14:33 -- common/autotest_common.sh@718 -- # xtrace_disable
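The Cleaning block above is autotest_cleanup sweeping DPDK runtime state: each /var/run/dpdk/spdkN directory holds one SPDK instance's config, fbarray_memseg hugepage mappings, memzone metadata, and mp_socket, while each spdk_pid* file is a lock named after the owning process. A rough equivalent of that sweep (the loop structure is an assumption; the paths are the ones listed above):
# Drop per-instance runtime dirs, then prune pid locks whose owner is gone.
for d in /var/run/dpdk/spdk[0-9]*; do
  [ -d "$d" ] && rm -rf "$d"
done
for f in /var/run/dpdk/spdk_pid*; do
  pid=${f##*spdk_pid}
  kill -0 "$pid" 2>/dev/null || rm -f "$f"  # keep locks of live processes
done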
00:32:43.694 14:14:33 -- common/autotest_common.sh@10 -- # set +x
00:32:43.694 14:14:33 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:32:43.694 14:14:33 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:32:43.694 14:14:33 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:32:43.694 14:14:33 -- spdk/autotest.sh@394 -- # hash lcov
00:32:43.694 14:14:33 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:32:43.694 14:14:33 -- spdk/autotest.sh@396 -- # hostname
00:32:43.695 14:14:33 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:32:43.695 geninfo: WARNING: invalid characters removed from testname!
00:33:01.821 14:14:50 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:01.821 14:14:52 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:03.201 14:14:54 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:05.108 14:14:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:06.490 14:14:57 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:08.398 14:14:59 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
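The lcov sequence above is the usual coverage post-processing: capture test counters (-c) from the build tree into cov_test.info, merge them (-a) with the pre-test baseline cov_base.info, then repeatedly filter (-r) vendored and system paths out of the combined tracefile in place. A condensed sketch of the same flow (paths shortened for readability; the flags mirror the log):
# Capture, merge with the baseline, then strip code we don't own.
lcov --no-external -q -c -d ./spdk -t "$(hostname -s)" -o cov_test.info
lcov -q -a cov_base.info -a cov_test.info -o cov_total.info
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*'; do
  lcov -q -r cov_total.info "$pat" -o cov_total.info  # remove matching records
done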
00:33:09.777 14:15:00 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:10.037 14:15:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:10.037 14:15:00 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:10.037 14:15:00 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:10.037 14:15:00 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:10.037 14:15:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:10.037 14:15:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:10.037 14:15:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:10.037 14:15:00 -- paths/export.sh@5 -- $ export PATH
00:33:10.037 14:15:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:10.037 14:15:00 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:10.037 14:15:00 -- common/autobuild_common.sh@438 -- $ date +%s
00:33:10.037 14:15:00 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721736900.XXXXXX
00:33:10.037 14:15:00 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721736900.mDLAQr
00:33:10.037 14:15:00 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:33:10.037 14:15:00 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']'
00:33:10.037 14:15:00 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:10.037 14:15:00 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:10.037 14:15:00 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:10.037 14:15:00 -- common/autobuild_common.sh@454 -- $ get_config_params
00:33:10.037 14:15:00 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:33:10.037 14:15:00 -- common/autotest_common.sh@10 -- $ set +x
00:33:10.037 14:15:00 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:33:10.037 14:15:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:33:10.037 14:15:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:10.037 14:15:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:10.037 14:15:00 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:33:10.037 14:15:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:10.037 14:15:00 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:10.037 14:15:00 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:10.037 14:15:00 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:10.037 14:15:00 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:10.037 14:15:00 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:10.047 + [[ -n 2986637 ]]
00:33:10.047 + sudo kill 2986637
00:33:10.056 [Pipeline] }
00:33:10.064 [Pipeline] // stage
00:33:10.069 [Pipeline] }
00:33:10.087 [Pipeline] // timeout
00:33:10.092 [Pipeline] }
00:33:10.109 [Pipeline] // catchError
00:33:10.114 [Pipeline] }
00:33:10.132 [Pipeline] // wrap
00:33:10.138 [Pipeline] }
00:33:10.153 [Pipeline] // catchError
00:33:10.162 [Pipeline] stage
00:33:10.165 [Pipeline] { (Epilogue)
00:33:10.178 [Pipeline] catchError
00:33:10.180 [Pipeline] {
00:33:10.192 [Pipeline] echo
00:33:10.194 Cleanup processes
00:33:10.200 [Pipeline] sh
00:33:10.489 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:10.489 3486133 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:10.503 [Pipeline] sh
00:33:10.788 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:10.788 ++ grep -v 'sudo pgrep'
00:33:10.788 ++ awk '{print $1}'
00:33:10.788 + sudo kill -9
00:33:10.788 + true
00:33:10.800 [Pipeline] sh
00:33:11.096 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:23.403 [Pipeline] sh
00:33:23.690 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:23.690 Artifacts sizes are good
00:33:23.706 [Pipeline] archiveArtifacts
00:33:23.714 Archiving artifacts
00:33:23.918 [Pipeline] sh
00:33:24.205 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:24.220 [Pipeline] cleanWs
00:33:24.231 [WS-CLEANUP] Deleting project workspace...
00:33:24.231 [WS-CLEANUP] Deferred wipeout is used...
00:33:24.238 [WS-CLEANUP] done
00:33:24.241 [Pipeline] }
00:33:24.262 [Pipeline] // catchError
00:33:24.277 [Pipeline] sh
00:33:24.562 + logger -p user.info -t JENKINS-CI
00:33:24.572 [Pipeline] }
00:33:24.586 [Pipeline] // stage
00:33:24.592 [Pipeline] }
00:33:24.606 [Pipeline] // node
00:33:24.611 [Pipeline] End of Pipeline
00:33:24.648 Finished: SUCCESS
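A closing note on the Epilogue's process cleanup above: the pgrep | grep -v | awk chain collects PIDs of leftover workspace processes, and because only the pgrep itself matched in this run, kill -9 executed with no arguments and the '+ true' kept the step green. The same idiom with the empty case handled explicitly (a sketch; the workspace path is the one from the log):
pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
  | grep -v 'sudo pgrep' | awk '{print $1}')
if [ -n "$pids" ]; then
  sudo kill -9 $pids  # intentionally unquoted: one PID per word
fi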